Handbook of Software Reliability Engineering CD


casre.txt

CASRE

------------------------------------------------------------------------

Several software reliability tools are currently available for

users to apply one or more of the known software reliability models

to a development effort. Popular tools include Statistical Modeling

and Estimation of Reliability Functions for Software (SMERFS),

and Software Reliability Modeling Programs (SRMP). In addition

to allowing users to make reliability estimates, these tools also

allow users to determine the applicability of a particular model

to a set of failure data.

A major issue in modeling software reliability, however, lies in

the ease-of-use of currently available tools. Nearly all current

tools have command-line interfaces, and do not take advantage of

the high-resolution displays that would allow the construction of

menu-driven or direct-manipulation user interfaces. Command-line

interfaces can make it more difficult for users to remember the

specific steps required to accomplish a task, while a menu-driven

interface can be set up to guide users through the necessary steps.

Finally, most currently available tools display their outputs in

tabular form. Although some tools provide a graphical display of

results, in most cases this is of limited utility in that the

display resolution is low since character-based graphics are used

in painting the screen. In addition, the variety of results that

may be graphically displayed is usually limited to interfailure

times or failure intensities. In measuring software reliability,

it is useful to see high-resolution displays of these quantities,

as well as cumulative number of errors, the reliability growth

curve, and the results of statistical methods used to determine

whether the model being executed is appropriate for the current

project.

CASRE is a software reliability modeling tool designed to address the
ease-of-use issue as well as other issues. CASRE is an extension of
the public-domain tool SMERFS, and is intended to execute both in a
DOS/Windows environment and a UNIX X Window System environment. The
command interface is menu driven; users are guided through selecting
a set of failure data and executing a model by the selective enabling
of pull-down menu options. Modeling

results are also presented in a graphical manner. After one or

more models have been executed, the predicted failure intensities

or interfailure times are drawn in a graphical display window.

Users can manipulate this window's controls to display the results

in a variety of ways, including cumulative number of failures and

the reliability growth curve. Users may also display the results

in a tabular fashion if they wish.

In addition, CASRE provides a particularly useful capability:
results from different models can be combined in various ways to
yield reliability estimates whose predictive quality is better than
that of the individual models themselves. CASRE is able to

increase prediction accuracy by combining the results of several

models in a linear fashion. Moreover, CASRE allows users to define

their own combinations and record them as part of the tool's

configuration. Weights for the components of the combination may

be static or dynamic, and may be based on statistical techniques

used to determine the applicability of a model to a set of failure

data. Once combination models have been defined, the steps

required to execute them are no different than executing a simple

model.
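
To make the combination idea concrete, here is a minimal sketch in
Python of a statically weighted linear combination of model results.
The function, model names, weights, and data are illustrative only;
they are not CASRE's actual interface.

from typing import Dict, List

def combine_predictions(predictions: Dict[str, List[float]],
                        weights: Dict[str, float]) -> List[float]:
    """Linearly combine per-model failure-intensity predictions."""
    total = sum(weights.values())          # normalize so weights sum to 1
    models = list(predictions)
    n = len(predictions[models[0]])
    return [sum(weights[m] * predictions[m][i] for m in models) / total
            for i in range(n)]

# Example: an equally weighted combination of three of the models
# listed later in this section (GO, MO, LV).
preds = {"GO": [0.9, 0.7, 0.5], "MO": [1.0, 0.8, 0.6], "LV": [0.8, 0.6, 0.5]}
print(combine_predictions(preds, {"GO": 1.0, "MO": 1.0, "LV": 1.0}))

A dynamically weighted combination would work the same way, except
that the weights would be recomputed at each observation from an
applicability measure such as prequential likelihood.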

CASRE Context

There are six major functional areas of CASRE:

1. File Operations ("File" menu)

2. Editing Operations ("Edit" menu)

3. Transformation and Smoothing Operations ("Filter" menu)

4. Model Selection and Application ("Model" menu)

5. Program Configuration ("Setup" or "Settings" menu)

6. Help System ("Help" menu)

One additional functional area is implemented to set the

controls associated with a graphics display window in which

failure data and the results of modeling activities are

displayed. Contents of the graphics display window can also be

shown in tabular form in another text window.

1. File Operations

There are six operations that can be performed by selecting

items from the File pull-down menu. These are: "Open", "Save",

"Save As", "Setup Printer", "Print", and "Exit". The defini-

tion and usage of these operations follow the usual conventions

in modern user-interface designs.

The File menu appears underneath the File button on the main

menu bar when the File button is selected and clicked. As long

as the File button remains selected, users can navigate through

the File menu. Menu item selection is accomplished by pointing

to an item in the menu and then releasing the mouse button.

2. Editing Operations

Four editing operations are available to users, allowing them to
modify the currently displayed work space. These are:

1. Undo - allows users to undo the most recent CASRE operation.

2. Convert Type - allows users to convert failure data in the

form of interfailure times to failure counts and test interval

lengths, and vice versa.

3. External Editor - allows users to invoke a preferred text editor,
word processor, or other application from within CASRE.

The application is selected from a user-configurable menu.

4. Escape to DOS - allows users to temporarily escape to DOS and

execute DOS commands. Users may then re-enter CASRE by typing

"exit" at the DOS prompt.

3. Transformation and Smoothing Operations

Filtering operations allow users to perform global modifications to
the failure data displayed in the work space. Since failure data is
frequently noisy, sometimes to the point that it is difficult or
impossible to estimate model parameters, users may wish to apply a
smoothing operation to the data to remove some of the noise. Users
may also wish to apply other transformations which will allow them
to change the shape and position of the failure data.

For instance, one of the transformations can change exponential

data to a straight line.

There are three types of filtering operations: transformation,
smoothing, and data subsetting. Multiple transformation and
smoothing filters may be applied to the data in a pipeline fashion.
There is also an "Undo All Filters" facility allowing users to
remove all of the filters that have been applied to the data. The
filtering capabilities are as follows:

o Affine Transformations

- Scaling and Offset - (X(i) * A) + B

- Power - X(i) ^ A

- Logarithmic - ln(A * X(i) + B)

- Exponential - exp(A * X(i) + B)

o Smoothing (e.g., using a Hann window)

o Subset Data (e.g., to select by failure severity)

o Remove All Filters
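
The sketch below shows, in Python with numpy, how two of the filters
above might be pipelined: the logarithmic transformation
ln(A * X(i) + B) followed by Hann-window smoothing. The function
names and the sample data are illustrative, not CASRE's internals.

import numpy as np

def log_transform(x, a=1.0, b=0.0):
    return np.log(a * x + b)              # the ln(A * X(i) + B) filter

def hann_smooth(x, width=5):
    w = np.hanning(width)                 # Hann window
    return np.convolve(x, w / w.sum(), mode="same")

interfailure_times = np.array([10., 12., 9., 20., 25., 22., 40., 38.])

# Filters applied in pipeline fashion, as described above.
data = interfailure_times
for f in [lambda x: log_transform(x, a=1.0, b=1.0), hann_smooth]:
    data = f(data)
print(data)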

4. Model Selection and Application

CASRE allows users to select and apply existing software reliability
models to the data displayed in the work space. These models,
described in Farr's "Software Reliability Modeling Survey" chapter
of this Handbook, include:

1. Brooks and Motley Model (BM)

2. Geometric Model (GM)

3. Goel-Okumoto (GO)

4. Jelinski-Moranda (JM)

5. Littlewood Model (LM)

6. Littlewood-Verrall (LV)

7. Musa-Okumoto (MO)

8. Generalized Poisson Model (PM)

9. Schneidewind Model (SM)

10. Yamada Delayed S-Shape Model (YM)

Users are also allowed to define combinations of existing models,

edit specifications of user-defined models, and remove user-defined

models from the menu of available models. Users may also specify

the parameter estimation method, the confidence bounds that will be

reported for model parameters, and the amount of time into the

future for which reliability predictions will be made. The Model

menu items are:

1. Model Selection - allows users to select and apply one or more

software reliability models to the failure data displayed in

the work space.

2. Model Definition - allows users to define combination models

to supplement models already provided with the tool. User-defined
models remain available during the current and subsequent sessions.

3. Model Editing/Model Removal - allows users to change or remove

descriptions of combinations that were previously created

using the "Model Definition" capability. Only user-defined

combinations of models can be changed or removed.

4. Parameter Estimation - allows users to select the method of

parameter estimation that will be used. The choices are maximum
likelihood (default) and least squares; a brief least-squares
fitting sketch follows this list.

5. Predictions - allows users to specify an interval of time over

which predictions about future reliability behavior will be

made.
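
As noted under item 4, here is a minimal least-squares fitting sketch
in Python with scipy for one of the listed models, Goel-Okumoto,
whose mean value function is m(t) = a * (1 - exp(-b*t)). The failure
data and starting values are illustrative, and CASRE's own estimator
may differ in detail.

import numpy as np
from scipy.optimize import curve_fit

def go_mean_value(t, a, b):
    # Goel-Okumoto expected cumulative failures by time t
    return a * (1.0 - np.exp(-b * t))

t = np.array([1., 2., 3., 4., 5., 6., 7., 8.])                # test intervals
cum_failures = np.array([8., 14., 19., 22., 25., 26., 27., 28.])

(a_hat, b_hat), _ = curve_fit(go_mean_value, t, cum_failures, p0=[30.0, 0.3])
print(f"a = {a_hat:.1f} (expected total failures), b = {b_hat:.3f}")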

5. Program Configuration

Two operations, "Add External Configuration" and "Remove External

Configuration," are available from this manual. The former allows

users to add the name of an application to the "External Editor"

submenu from which external applications can be invoked. The later

allows users to remove the name of an application from the "Exter-

nal Editor" submenu from which applications can be invoked.

6. Help System

The help system provides context-sensitive on-line assistance to

users by allowing them to search for and read descriptions of the

major CASRE functional areas.

7. Displays

The graphics display window is invoked when the "Open" item in the
main window's "File" menu is selected. The graphics

display window plots the failure data displayed in the work space

and the results of applying models to that data. A separate menu

bar is associated with this window, allowing users to control the

contents and appearance of the display or send the contents of the

window to an output device (disk file or printer). The menu items

on the main menu bar are:

1. Plot - There are 6 plot operations ("Save", "Save As", "Redraw
Data", "Redraw Results", "Draw from File", and "Printer
Setup/Print") that can be selected for file manipulation and
drawing.

2. Model Result Selection - Multiple sets of model results can be

displayed in the graphics display window. This capability

allows users to specify the models whose results should be

plotted in that window.

3. Display Type - Modeling results and evaluations of models can

be displayed in a variety of ways. The following types of

model results can be plotted in the graphics window:

o Interfailure times

o Number of failures per test interval

o Test interval lengths

o Cumulative number of failures

o Reliability growth function

In addition to estimating reliability functions, users may wish

to determine the applicability of the model(s) being run to the

current set of failure data. The following types of model

evaluation results can be drawn in the graphics window:

o Goodness-of-fit tests - Chi-Square or Kolmogorov-Smirnov

o Current value(s) of the prequential likelihood

o Prediction noisiness

o u-plot

o y-plot

o Scatter-plot of u's

o Model rankings

4. Scaling - allows users to scale the x and y axes of the graphics
display window and to shift its origin.

5. Table - allows users to view the modeling results in the

graphics display window in a tabular form. This table shows

the detailed data on which plotted model results, evaluations,

and rankings are based.
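
As an illustration of the u-plot evaluation listed under "Display
Type": each u value is the model's predicted cumulative distribution
function for the next interfailure time, evaluated at the time
actually observed; for an unbiased model the u's look uniform on
(0, 1). A minimal Python sketch follows, assuming exponential
predictive distributions purely for illustration.

import numpy as np

def u_plot_points(observed_times, predicted_cdfs):
    """Empirical CDF points of the u values u_i = F_i(t_i)."""
    u = np.sort([F(t) for F, t in zip(predicted_cdfs, observed_times)])
    y = np.arange(1, len(u) + 1) / (len(u) + 1)
    return u, y

# Illustrative one-step-ahead exponential predictions with rising MTTF.
rates = [1/10., 1/12., 1/15., 1/20.]
cdfs = [lambda t, lam=lam: 1.0 - np.exp(-lam * t) for lam in rates]
observed = [9.0, 14.0, 13.0, 24.0]

u, y = u_plot_points(observed, cdfs)
print(u, y, np.max(np.abs(u - y)))    # last value: Kolmogorov distance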

------------------------------------------------------------------------

CASRE On-Screen Appearances

Figures 1-8 below show a series of screen dumps for the CASRE tool
as described. It can be seen that the application of

models to failure data is a straightforward process. Users are

also given a considerable amount of choice in the models to be

applied. This combination of simple operation and variety in the

available models makes it easy for users to identify an appropriate

model for a particular development effort or investigate a family

of models.

o Screen 1 - opening a failure data file

The screen is shown in Figure 2 in the Appendix. To choose a set

of failure data on which a reliability model will be run, users

select the "File" menu with the mouse. After selecting the "Open"

option in the File menu, a dialogue box for selecting a file

appears on the screen. The current directory appears in the editable
text window at the top of the dialogue box. The failure history
files in that directory are listed in the scrolling text window.
Users select a file by highlighting its name (scrolling the file
name window if necessary) and then pressing the "Open" button.

To change the current directory, users enter the name of the new
directory in the "Current Directory" window and press the "Change
Directory" button. Pressing the "Cancel" button removes the dialogue
box from the screen.

o Screen 2 - initial failure data display

The screen is shown in Figure 3 in the Appendix. After opening a

failure history file from the "File" menu, the contents of the file

are displayed in tabular and graphic forms. The tabular
representation resembles a spreadsheet, and users can perform similar types

of operations (e.g. selecting a range of data, deleting one or more

rows of data). All of the fields can be changed by users except

for the "Interval Number" field (or "Error Number" field if the

data is interfailure times). In this example, the selected data

set is in the form of test interval lengths and number of failures

per test interval. Users can scroll up and down through this tabular
representation and resize it as per the MOTIF conventions.

The large graphics window displays the same data as the worksheet.

If the failure data set is interfailure times, the initial graphical
display is interfailure times. If, as in this example, the

failure data set is test interval lengths and failure counts, the

initial graphical display is the number of failures per test
interval. The display type can be changed by selecting one of the items

from the "Display Type" menu associated with the graphics window.

Users can move forward and backward through the data set by pressing
the right arrow or left arrow buttons at the bottom of the graphics
window. Finally, the iconified window at the lower left corner of
the screen lists the summary statistics for the data. To open this
window, users simply click on the icon.

o Screen 3 - selecting failure data range

The screen is shown in Figure 4 in the Appendix. Users will
frequently use only a portion of the data set to estimate the current

reliability of the software. This is because testing methods may

change during the testing effort, or different portions of the data

set may represent failures in different portions of the software.

To use only a subset of the selected data set, users may simply

"click and drag" on the tabular representation of the data set to

highlight a specific range of observations. Users may also select

previously-defined data ranges. To do this, users choose the

"Select Range" option of the Edit menu. This brings up a dialogue

box containing a scrolling text window in which the names of

previously-defined data ranges and the points they represent are

listed. To select a particular range, users highlight the name of
the range in the scrolling text window and press the "OK" button.

Pressing the "Cancel" button removes the dialogue box and the Edit

menu from the screen.

Once a range has been selected, all future modeling operations will

be only for that range. The selected data range is highlighted in

the tabular representation. The graphics display will change to

include only the highlighted data range. All other observations

will be removed from the graphics display.

o Screen 4 - data filtering

The screen is shown in Figure 5 in the Appendix. After selecting a

data range, users may wish to transform the file or smooth the

data. Software failure data is frequently very noisy; smoothing

the data or otherwise transforming it may improve the modeling

results. To do this, users select one of the options in the

"Filter" menu. There are five affine transformations which users

may apply to the data, and six types of smoothing. Transformations

and smoothing operations may be pipelined - for example, users

could select the "ln(A * X(i) + B)" transformation followed by the

B-spline smoothing operation. The number of filters that may be

pipelined is limited only by the amount of available memory. The
tabular representation of the failure data is changed to reflect the
filter, as is the graphical display of the data. The type of filter
applied to the data is listed at the right-hand edge of the graphics
display window.

Once a series of filters has been applied to the data, users may

remove the effect of the most recent filter by selecting the "Undo"

option of the Filter menu. To remove the effect of the entire

series of filters, users select the "Undo All Filters" option of

the Filter menu.

o Screen 5 - applying software reliability models

The screen is shown in Figure 6 in the Appendix. After users have

opened a file, selected a data range, and done any smoothing or

other transformation of the data, a software reliability model can

be run on the data. In the Model menu, users have the choice of 10

individual models or a set of models which combine the results of

two or more of the individual models. Users may also choose the

method of parameter estimation (maximum likelihood or least

squares), the confidence bounds that will be calculated for the

selected model, and the interval of time over which predictions of

future failure behavior will be made.

o Screen 6 - prioritization of model selection criteria

The screen is shown in Figure 7 in the Appendix. There are many

models from which to choose in this tool. Users may not know which

model is most appropriate for the data set being analyzed. Using

CASRE, users can request, "display the results of the individual

model which best meets the four prioritized criteria of accuracy

(based on prequential likelihood), biasedness, trend, and noisiness

of prediction." To do this, a user first selects the "Individual"

option of the Model menu. A submenu then appears, on which 10

individual models are listed, as well as a "Choose Best" option.

The user selects the "Choose Best" option, which results in a

"Selection Criteria" dialogue box being displayed. The user moves

the four sliders in this dialogue box back and forth to establish

the relative priorities of the four criteria. Numerical values of

the priorities are displayed in the text boxes on the right side of

the dialogue box. Once the priorities have been established, the

user presses the "OK" button. CASRE then proceeds to run all of the

individual models against the data set, first warning the user that

this is a time-consuming operation and allowing cancellation of the

operation. If the user continues, CASRE provides the opportunity

for cancellation at any time if the user decides that the operation

is taking too much time.

o Screen 7 - display of model results

The screen is shown in Figure 8 in the Appendix. Once a model has

been run on the failure data, the results are graphically

displayed. Actual and predicted data points are shown, as are
confidence bounds. The model is identified in the window's title bar;

the percent confidence bounds are given at the right side of the

graphics window. This concludes one round of software reliability

estimation with CASRE.

o Screen 8 - determination of model bias

The screen in which one of the results of model evaluation can be
displayed is shown in Figure 9. Statistical methods can be applied

to determine the applicability of the model to the failure data set

on which it was executed. To display the evaluation results, a

user selects the "Evaluations" pull-down menu in the graphics

display window's main menu bar. Several model evaluation methods,

including u-plots and y-plots, are available to users. In this
example, the user has chosen the "U-Plot" menu item. The u-plot,
which indicates biases in the model, is displayed on screen. CASRE
also indicates whether the model has an optimistic bias (predictions
of time to the next failure tend to be greater than observed
inter-failure times) or a pessimistic bias. To return to the

display shown in the figure, the user may select the "Display Type"

pull-down menu and choose the desired type of reliability-related

display.

smerfs.txt

SMERFS

------------------------------------------------------------------------

Note: For more details on SMERFS, see www.slingcode.com/smerfs

------------------------------------------------------------------------

SMERFS (Statistical Modeling and Estimation of Reliability Functions
for Software) is a program for estimating and predicting the reliability

of software during the testing phase. It uses failure count information

to make these predictions.

To run SMERFS from this control panel, the user supplies five pieces

of information. These are the name of the history file, the name of

the assumptions file, the name of the input data file, the name of the

output data file, and the type of data contained in the input data file.

The history file is an output file that will be created by SMERFS. It

is a trace file that contains all of the user input and SMERFS outputs

for a particular run so that the user can go back and look at the run

at a later time. USERS WILL USUALLY WANT TO PRODUCE A HISTORY FILE,
SINCE INFORMATION SUCH AS MEAN-TIME-TO-FAILURE PLOTS IS SAVED IN
THIS FILE.

The assumptions file is an input to SMERFS which is used when the

user queries SMERFS about the characteristics of a particular software

reliability model. SMERFS uses the contents of this assumptions file

to reply to the user's query.

The input data file contains the failure history data on which SMERFS

will actually operate to produce the reliability estimates and predictions.

The user must also specify the type of data contained in the input data

file by clicking the appropriate radio button. If the selected data type

does not correspond to the type of data actually in the input file, the

estimates and predictions made by SMERFS will not be valid.

The output data file is a file that the user can specify to which

SMERFS will write failure history data created or edited by the user

during the current SMERFS session. This is different from the history

file described above, since the history file is a trace file which
records all user input and SMERFS responses. The output data file is

actually a failure history file, created by the user, which can be

used in subsequent sessions as an input data file.

The user can specify these files in the text entry areas. If they are

specified as file names without a path name in front of them, SMERFS will

assume that these files are in the current directory. These files may

also be specified as FULLY QUALIFIED file names if the user wants to make

use of files that are in directories other than the current one. The

character length limit for file names is 128 characters in this
implementation. Note the following:

1. The user can elect to not produce a history file by clicking the
"none" button associated with the history file name text box.

2. The user can elect to input failure history data from

the keyboard rather than reading it in from an input

data file. This is done by clicking the "keybd" button

associated with the input data file name text box. If

keyboard input is selected, the user must still specify

the type of failure data to be input by clicking one of

the four "data type" radio buttons described above.

IN THIS IMPLEMENTATION OF THE SMERFS CONTROL PANEL, KEYBOARD INPUT
AND DATA FILE INPUT ARE MUTUALLY EXCLUSIVE

OPTIONS.

3. The user can choose to specify no output data file by

clicking the "none" button associated with the output

data file name text box. IT SHOULD BE NOTED THAT IT

IS POSSIBLE TO SPECIFY KEYBOARD INPUT AND NO OUTPUT

FILE. THIS IS GENERALLY UNWISE, HOWEVER, SINCE ANY

DATA CREATED BY THE USER IN THIS SITUATION WILL NOT

BE SAVED FOR FUTURE USE.
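
The file-name rules just described can be summarized in a small
Python sketch. This only illustrates the stated rules (bare names
resolve against the current directory, fully qualified names are
used as given, and names are limited to 128 characters); it is not
part of SMERFS.

import os

MAX_NAME = 128   # file name limit in this implementation

def resolve_smerfs_file(name: str) -> str:
    if len(name) > MAX_NAME:
        raise ValueError(f"file name exceeds {MAX_NAME} characters")
    if os.path.isabs(name):
        return name                          # FULLY QUALIFIED name, as given
    return os.path.join(os.getcwd(), name)   # assumed in current directory

print(resolve_smerfs_file("interval1.dat"))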

If users want to change directories, they click the "Change Directory"

button. A dialog box containing a text entry area, an "OK" button, and

a "CANCEL" button will appear. The user will then enter the name of the

directory to change to in the text entry area. Pressing the "OK" button

will cause a directory change to that directory, while pressing "CANCEL"

will cancel the operation and leave the user in the directory in which he

started. If the user enters the name of a non-existent directory or the

name of a directory which he cannot access, error message boxes with the

appropriate error messages will appear. When a successful directory change

has been made, the new directory name will appear in the "Current Directory"

text window in the main control panel. Directory names are limited to

128 characters.

In this implementation, users need to run SMERFS from a directory in

which they have write permission. This is because SMERFS creates
temporary history files in the current directory, even if the user
has requested

that no history files be produced. The temporary history files are removed

at the end of the current SMERFS session. However, SMERFS must be able to

create these temporary files during the session.

The following table gives a list of assumption files, data sets, and
sample SMERFS sessions that are available for the user to inspect or
use while running SMERFS.

DATA SET NAME:       interval1.dat    interval2.dat    cpudata1.dat
=========================================================
Assumptions File:    smerfs3.assum    smerfs3.assum    smerfs3.assum
Sample Scenario:     interval1.his    interval2.his    cpudata1.his

The assumptions file is used to actually run the model. The sample

scenario is a file that the user can view to see what SMERFS actually

looks like when it's running. Interval1.dat and interval2.dat are
"test interval length/failure count" type of data, while "cpudata1.dat"

is of the type "Execution time between failures." All of these files

are available in the ~cs577/CASE/reliability directory.

For more information on SMERFS, consult the SMERFS user's guide. This

is a technical report available from the Naval Surface Weapons Center,

NSWC TR 84-373, "Statistical Modeling and Estimation of Reliability

Functions for Software (SMERFS) User's Guide." (Revision 3 published in

1993.) For more information on software reliability modeling in general,

refer to Chapter 4 "Software Reliability Modeling Survey" of this

Handbook (Handbook of Software Reliability Engineering), as well as

"Software Reliability: Measurement, Prediction, Application" by John

Musa, Anthony Iannino, and Kazu Okumoto, also published by McGraw-Hill

in 1987. In addition, the IEEE has a committee devoted to Software
Reliability Engineering which sponsors the annual International
Symposium on Software Reliability Engineering (ISSRE).

softrel.txt

SoftRel: The Simulation Technique

------------------------------------------------------------------------

Overview

The software reliability process simulator SoftRel does not assume
staff, resources, or schedule models, but instead accepts these as
inputs in the form of schedule quintuples. The simulator also
captures the effects of interrelationships among activities, and
characterizes all events as piecewise-Poisson Markov processes with
explicitly defined event rate functions, as explained in Chapter 16
of this Handbook.
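
A minimal sketch of the piecewise-Poisson idea, in Python with
numpy: over each small time slice dt the event rate is treated as
constant, so the number of events in the slice is Poisson with mean
rate(t) * dt. The rate function below is illustrative, not one of
SoftRel's defined event rates.

import numpy as np

rng = np.random.default_rng(0)

def simulate_events(rate_fn, t_end, dt):
    """Count events of one class over [0, t_end) in slices of dt."""
    events, t = 0, 0.0
    while t < t_end:
        events += rng.poisson(rate_fn(t) * dt)   # constant rate within slice
        t += dt
    return events

# Illustrative rate: fault discovery slowing as testing proceeds.
print(simulate_events(lambda t: 5.0 * np.exp(-0.1 * t), t_end=30.0, dt=0.5))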

The set of adjustable parameters input to SoftRel is called the "model";

the set of event status monitors that describes the evolving process at any

given time is called the set of "facts". The "model" and "facts"

structures are defined so as to accommodate multiple categories of

classes of events in the subprocesses of the overall reliability process,

with each "model"-"facts" pair representing a separate class of events.

Because of the usual assumption that event processes are independent,

the same simulation technique could be applied simultaneously by using

separate computer processors running the same algorithms for each class.

If only a single processor were to be used, the same algorithms could

be applied to each class separately, but interleaved in time, or else they

could be run entirely separately. In entirely separate executions,

the sets of results would be merged later into a proper time sequence.

For simplicity, in its initial form, the simulator reported here only

accommodates a single category of events for each of the reliability

subprocesses. Separate runs using different "model" parameters can be

later merged to simulate performance of a single process that has multiple

failure categories, if desired. Extension of SoftRel to accommodate the

more general case is not conceptually difficult, but has not yet been

undertaken. Later versions may possibly include multiple failure categories,

should this feature prove beneficial.

SoftRel simulates two types of failure events, namely, defects in

specification documents and faults in code, all considered to be in

the same seriousness category, as reflected by the single set of "model"

parameters. As an aside, we note that the seriousness category is often

indicated by the probabilities of observation and outage, and the lengths

of outages: a process with these quantities high will have highly visible

and abortive failures, whereas when these probabilities are low, the

process will have rarely noticed, inconsequential failures.

The "documentation" currently simulated by SoftRel consists only of
requirements, design, interface specifications, and other entities
whose absence or defective nature can introduce faults into
subsequently produced code.

Integration and test procedures, management plans, and other ancillary

documentation, when deemed not to correlate directly with fault generation,

are excluded. The presumption is that the likelihood of a fault

at any given time increases proportionately to the amount of documentation

missing or in error.

SoftRel does not currently simulate the propagation of missing and

defective requirements into missing and defective design and interface

specifications; both requirements analysis and design activities are

currently combined in the document construction and integration

phases. All defects occur either in proportion to the amount of new and

reused documentation, to the amount that was changed, deleted, and added,

or to the number of defects that were reworked.

------------------------------------------------------------------------

The Simulation Input Parameters

The reliability process in SoftRel is fairly comprehensive with

respect to what really transpires during software development.

The capability to mirror that process in a simulator requires a
large number of parameters relating to the ways in which people and
processes interact.

The SoftRel "model" parameters are the following:

Model parameters (fixed per execution):

dt                           simulation time increment, days
workday_fraction             average workdays per calendar day
doc_new_size                 new documentation units
doc_reuse_base               reused documentation units
doc_reuse_deleted            reused documentation units deleted
doc_reuse_added              documentation units added to reuse base
doc_reuse_changed            documentation units changed in reuse
doc_build_rate               new documentation units/workday
doc_reuse_acq_rate           reused documentation acquisition units/workday
doc_reuse_del_rate           reused documentation deletion units/workday
doc_reuse_add_rate           reused documentation addition units/workday
doc_reuse_chg_rate           reused documentation changed units/workday
defects_per_unit             defects generated/new documentation unit
reuse_defect_rate            reused documentation indigenous defects/unit
del_defect_rate              defects inserted/deleted reused unit
add_defect_rate              defects inserted/addition to reused unit
chg_defect_rate              defects inserted/changed reused unit
hazard_per_defect            documentation hazard units added or removed
                             per defect
new_doc_inspect_frac         fraction of new documentation inspected
reuse_doc_inspect_frac       fraction of reused documentation inspected
insp_doc_units_per_workday   inspected documentation units/workday
inspection_limit             relative number of defects that can be
                             removed by inspection
find_rate_per_defect         rate of defect discovery per hazard unit
                             per documentation unit
defect_fix_rate              corrected documentation defects/workday
defect_fix_adequacy          true documentation fixes/correction
new_defects_per_fix          defects created/correction
doc_del_per_defect           documentation units deleted/correction
doc_add_per_defect           documentation units added/correction
doc_chg_per_defect           documentation units changed/correction
code_new_size                new code units
code_reuse_base              reused code units
code_reuse_deleted           reused code units deleted
code_reuse_added             code units added to reuse base
code_reuse_changed           code units otherwise changed in reuse base
code_build_rate              new code units/workday
code_reuse_acq_rate          reused code acquired, units/workday
code_reuse_del_rate          reused code deletions, units/workday
code_reuse_add_rate          reused code additions, units/workday
code_reuse_chg_rate          reused code changed, units/workday
faults_per_unit              faults generated/code unit
reuse_fault_rate             indigenous reused code faults/code unit
del_fault_rate               faults inserted/deleted code unit
add_fault_rate               faults inserted/added code unit
chg_fault_rate               faults inserted/changed code unit
faults_per_defect            number of code faults/defect
miss_doc_fault_rate          faults/code unit generated per missing
                             documentation fraction
hazard_per_fault             code hazard units added or removed per fault
new_code_inspect_frac        fraction of new code inspected
reuse_code_inspect_frac      fraction of reused code inspected
insp_code_units_per_workday  inspected code units/workday
find_rate_per_fault          fraction of faults detected per inspected unit
fault_fix_rate               code faults "corrected"/workday
fault_fix_adequacy           true fault fixes/"correction"
new_faults_per_fix           faults created/"correction"
code_del_per_fault           code units deleted per fault "correction"
code_add_per_fault           code units added per fault "correction"
code_chg_per_fault           code units changed per fault "correction"
tests_gen_per_workday        test cases/workday
tests_used_per_unit          test cases used/resource unit
failure_rate_per_fault       failures/resource unit/fault density
miss_code_fail_rate          failures per resource unit per missing
                             code fraction
prob_observation             probability that a failure is observed
prob_outage                  probability that a failure causes outage
outage_time_per_failure      delay caused by failure, days
analysis_rate                failures analyzed/workday
analysis_adequacy            faults recognized/fault analyzed
repair_rate                  fault "repairs"/workday
repair_adequacy              true repairs/"repair"
new_faults_per_repair        faults created/"repair"
validation_rate              "repairs" validated/workday
find_rate_per_fix            detected bad repairs/repair validation
retest_rate                  retested faults/resource unit
retest_adequacy              detected bad repairs/retest/unrepaired fault
schedule                     list of schedule_item packets:
                             (t_begin, t_end, event, staff, resources)*

When the work effort expended by an activity is needed, it may be

computed by using the instantaneous staffing, or work force,

function s(alpha, t) defined for each such activity alpha over the time

periods of applicability. The corresponding work effort w(alpha, T)

over a time interval (0, T), for example, is

w(alpha, T) = ∫_0^T s(alpha, t) dt

In SoftRel, s(alpha, t) is coded as "staffing(A, p, M)", where "A" is

the activity, "p" points to a "facts" structure, and "M" points to a

"model".

Similarly, if computer CPU time, or another computer resource, is

required in calculating the event-rate functions above, it is found

through the conversion function q(alpha, t), which is defined for each

activity alpha as the CPU or resource utilization per wall-clock day.

The CPU resource usage over the time interval (0, T), for activity

alpha, for example, is

T_cpu(alpha, T) = ∫_0^T q(alpha, t) dt

The function q(alpha, t) in SoftRel appears as "resource(A, p, M)",

with the same arguments as "staffing", above.

The number of wall-clock days may be interpreted either as

literal calendar days, or as actual workdays. These alternatives are

selected by proper designation of the "model" parameter, "workday_fraction".

A value of unity signifies that time and effort accumulate on the basis

of one workday effort per schedule day per individual. A value of

5/7 means that work effort and resource utilization accumulate on the

average only during 5 of the 7 days of the week. A value of 230/365

denotes that 230 actual workdays span 365 calendar days. These

compensations are made in the "staffing" and "resource" functions, above.

Activities of the life cycle are controlled by the staffing function.

No progress in an activity takes place unless it has an allocated work

force. If, however, staffing is non-zero, event rates involve s(alpha, t)

when work effort dependencies exist, and q(alpha, t) when CPU

dependencies are manifest.

Staffing and computer resource allocations in the "model"

are made via the "schedule" list of "schedule item" packets,

each of which contains

"activity" & = & index of the work activity

"t_begin" & = & beginning time of the activity, days

"t_end" & = & ending time of the activity, days

"staffing" & = & staff level of the activity, persons

"cpu" & = & resources available, units per day

"next" & = & pointer to next "schedule item" packet

The entire list is merged by the staffing and resource-utilization

functions, s and q, or "staffing" and "cpu" in the program, to provide

scheduled workforce and computer resources at each instant of time

throughout the process. Both "staffing" and "cpu" express resource

units per project day. If the schedule quintuples include weekends,

holidays, and vacations, then staff and resource values must be

compensated so that the integrated staff and resources over the project

schedule are the allocated total effort and resource values. This is

done via the parameter "workday_fraction" discussed above.
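
A minimal sketch in Python (SoftRel itself is not written this way,
and the schedule data are illustrative) of how the schedule list
yields the staffing function s(alpha, t) and the work effort
w(alpha, T), with the workday_fraction compensation described above:

SCHEDULE = [
    # (activity, t_begin, t_end, staffing, cpu)
    ("construction", 0.0, 100.0, 4.0, 2.0),
    ("inspection", 60.0, 120.0, 2.0, 0.5),
]
WORKDAY_FRACTION = 5.0 / 7.0   # effort accumulates 5 of every 7 days

def staffing(activity, t):
    """s(alpha, t): summed staff of all matching active schedule items."""
    return sum(s for a, t0, t1, s, _ in SCHEDULE
               if a == activity and t0 <= t < t1)

def work_effort(activity, t_total, dt=1.0):
    """w(alpha, T) = integral of s, by rectangle-rule integration."""
    return sum(staffing(activity, i * dt) * WORKDAY_FRACTION * dt
               for i in range(int(t_total / dt)))

print(work_effort("construction", 100.0))   # 4 persons * 100 days * 5/7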

------------------------------------------------------------------------

Event Status Monitors: Output

The event status indicators of interest, or "facts", during the

reliability process are the time-dependent values

Project Status ("facts" output for each dt iteration):

active Project is active if true, else completed

DU Total documentation units goal

DU_t Total number of documentation units built

DU_n New documentation units

DU_r Acquired reused documentation units

DU_rd Reused documentation deleted units

DU_ra Reused documentation additional units

DU_rc Reused documentation changed units

E_d Human errors putting defects in all documentation

E_dn Human errors putting defects in new documentation

E_dr Human errors putting defects in reused documentation

DH Total documentation hazard

DH_n Hazard in new documentation

DH_r Hazard in reused documentation

DI_t Inspected portion of all documentation

DI_n Inspected portion of new documentation

DI_r Inspected portion of reused documentation

D Documentation defects detected

d Documentation defects corrected

CU Total code units goal

CU_t Total code units built

CU_n New code units

CU_r Acquired reused code units

CU_rd Reused code deleted units

CU_ra Reused code additional units

CU_rc Reused code changed units

E_f Human errors putting faults in all code

E_fn Human errors putting faults in new code

E_fr Human errors putting faults in reused code

CH Total code hazard

CH_n New code hazard

CH_r Reused code hazard

CI_t Inspected portion of all code

CI_n Inspected portion of new code

CI_r Inspected portion of reused code

e Code faults detected in inspection

h Code inspection faults corrected (healed)

C Test Cases prepared

c Test cases expended

F Failures encountered during testing

A Failures Analyzed for fault

f Faults isolated by testing

w Faults needing rework, revalidation, etc.

u Number of faulty repairs

R Number of fault repairs undertaken

V Validations conducted of fault repairs

RT Retests conducted

r Faults actually repaired

rr Faults re-repaired

outage Total outage time due to failure

t Current accumulated time

T[N] Time by activity array

W[N] Work effort by activity array

cpu[N] CPU/resource usage by activity array

"Documentation units" and "code units" are typically counted in

pages of specifications and lines of source code, but other conventions

are acceptable, provided that rate functions and parameters of the

"model" are consistently defined.

Other status metrics "facts" of interest are

"t" & = & Current time.

"T[i]" & = & Cumulative time consumed by activity "i".

"W[i]" & = & Cumulative work effort consumed by activity "i".

"cpu[i]" & = & Cumulative CPU or other computer resource consumed

by activity "i".

"outage" & = & Total outage time due to failure.

"active" & = & Boolean indicator, true if the process has not yet

terminated.

Note that the time-related activities above which measure times in days

are expressed as elapsed wall-clock time. Conversions to effort in

workdays and to CPU (or other) computer resource utilization in

resource-days are "model"-related

and addressed previously.

sretools.txt

AT&T SRE Toolkit

------------------------------------------------------------------------

SOFTWARE RELIABILITY ENGINEERING

Reliability Estimation Tools

SRE TOOLKIT

_____________________________________________________

Introduction

The reliability estimation tools described in this
guide are particularly useful during system test
and field trial. This version of the SRE TOOLKIT
contains the standard release of the reliability
estimation tool EST, the graphics support tool PLOT,
and a number of prototype tools used in conjunction
with exercises in course SN9110 (Software Reliability
Engineering - Application) provided through
Kelly Education and Training Center. This guide is
a reference guide for using the standard tools EST
and PLOT; Programmer Notes on using the prototype
tools are provided in Appendix B. These materials
are not intended as a training or tutorial guide on
Software Reliability Engineering and should be used
along with training from course SN9110.

The reliability estimation tool EST described in
this guide is particularly useful during system
test and field trial phases of software product
development. During these phases, failure events
are encountered and the underlying faults that
cause the failures are being removed. This results
in "reliability growth" during product test or
trial. The tools implement techniques discussed in
reference [MIO] to estimate the current level of
software product reliability and to predict the
remaining time to attain a specified reliability
objective.

The tool EST discussed in this guide can be used to
"fit" one of two reliability models described in
reference [MIO] to failure data. In turn, EST uses
the "fitted" model to estimate several useful
reliability measures such as present failure intensity,
remaining time to reach a specified failure intensity
objective, and the calendar date when a failure
intensity objective will be met. EST produces its
output as a tabular report and a file of "plot"
commands. The tool PLOT in turn takes the file of
plot commands and produces a set of plots on a
graphics medium. Graphic media currently supported
include PostScript printers accessible through UNIX
systems or graphics monitors on PCs running under
MS-DOS.

There are two versions of the tools: one version
runs under the UNIX operating environment while the
other runs under the MS-DOS operating
environment. Both versions of the

tools have been carefully engineered so that the

format of input and output data, plots, and general

use of the tools is the same under either UNIX or

MS-DOS. This provides considerable flexibility to

the user. A user can distribute work between a
large shared UNIX system, taking advantage of its
support facilities, and a small PC workstation,
obtaining quick turn-around "what if" analysis of
collected data using the screen-graphics
capabilities of the MS-DOS version. The user can
use terminal emulator programs (such as "ctrm") to
up-load and down-load input and output files
between a UNIX system and a PC workstation to take
advantage of the particular facilities available on
each system, for example using the documenter and
printer facilities of the UNIX system to produce an
output report including graphical output from the
tools.

This guide is a reference guide for using the tools
and is not intended as a training or tutorial guide
on Software Reliability Engineering. It should be
used in conjunction with training from course
SN9110 (Software Reliability Engineering -
Application) provided through Kelly Education and
Training Center.

_____________________________________________________

Organization of this Guide

The remainder of this guide is divided into three
parts. The first part contains information

on required hardware and operating environments for

running the tools, instructions on installing the

tools, instructions on getting started and tips on

using the tools. At the end of the first part is

information on what is available in the way of

training, project support, tool support, and
references.

The second part is a set of manual pages in
Appendix A providing a detailed reference on using
the tools themselves. The manual pages provide
examples of inputs to and resulting outputs from
the tools. The manual pages also provide pointers
back into reference [MIO] for further information
on Software Reliability Engineering itself, on the
input data that is needed by the tools, and on
interpreting output of the tools.

The third part is a set of Programmer Notes in
Appendix B on running a set of prototype tools
developed for SN9110. There are also UNIX and
MS-DOS versions of these tools. Included with the
Programmer Notes are a set of Manual Pages for
using these tools. Again, the manual pages provide
examples of inputs to and resulting outputs from
the tools.

_____________________________________________________

Hardware and Software Requirements

The UNIX version of the tools runs under any
version of UNIX System V and on any hardware
processor that supports UNIX System V. Care was
exercised to use a restricted set of UNIX system
library calls to maintain as much portability as
possible between systems.


The MS-DOS version should work with MS-DOS release

3.3 or greater running on any AT&T compatible PC.

To use graphics, the PC should be equipped with a

CGA, EGA, VGA or Hercules compatible graphics

board.

If extensive plotting is to be done or large
(greater than 150 failure events) failure data sets
will be analyzed, floating-point hardware will
significantly reduce processing time (from minutes
to seconds). For running the tools on a PC, this
would mean investing in a numeric coprocessor
(sometimes referred to as a math coprocessor) chip.
These chips usually have a model designation
of 8xx87 or 8x87 depending on the type of processor
in your PC.

_____________________________________________________

Installation

UNIX Version - The UNIX version of the tools is

distributed using UNIX electronic mail (email)
facilities. To obtain a copy, contact

Michael R. Lyu

giving your name and email address. You will

receive email back confirming the receipt of your

request and indicating when and how the tools will

be sent to you. The tools will be sent to you via
the UNIX uuto(1) command (see reference [ATT]).
You will receive email

indicating the tools have arrived on your system.

At that time, you should execute the UNIX uupick(1)

command to retrieve them. First change directories

to whichever directory you want the tools directory

SRE_TOOLS installed into, then execute uupick. At

the prompt

from system !whuxr: directory SRE_TOOLS ?

type "m ." followed by a carriage return. At this

point the SRE_TOOLS directory will be installed in

the current directory you are in. Then, change

directory to the SRE_TOOLS directory and read the

file READ.ME which provides further information on


installing the tools. Complete installation

requires approximately 5 to 15 minutes depending on

the processing speed and processing load levels of

your system.

MS-DOS Version - The MS-DOS version is distributed
on either 5-1/4 inch or 3-1/2 inch floppies. Each
floppy contains "executable" program files est.exe
and plot.exe and test data files tst.fp, tst.ft,

tst.pc, tst_stg.fp, tst_stg.ft and tst_stg.pc. The

floppy diskette can be inserted in the appropriate

drive, the drive selected by typing a: or b:. If

your PC has a hard disk, we recommend copying the

program and data files into a directory on your

hard disk and running the tools from your hard

disk. Otherwise, you can run the tools directly

from the floppy diskette (of course, making a

backup copy of the diskette first). Alternatively,

with the UNIX version of the delivered tools, there

is a directory named dos that contains copies of

the "executable" program and data files. These

files can be up-loaded to your PC using the
file-transfer capabilities of your favorite terminal

emulator package (such as ctrm).


_____________________________________________________

Getting Started, Using the Manual Pages

You might want to first review the manual pages in
Appendix A. Manual page EST(1)

describes the est program. First, quickly browse

the "Description" section of the manual page and

then read the "Example" section to follow the exe-

cution of a particular example. After reading the

manual page, you may want to proceed to the next

section of this guide to execute the example in

your UNIX or MS-DOS environment. If you want to

learn more about the input data files for est, then

you'll want to read the .FP(5) manual page that

describes the contents of the "failure parameter"

file and the .FT(5) manual page that describes the
contents of the "failure time" file. Again, browse
the "Description" section of the manual page, then
read the "Example" section. Refer back to the
"Description" section whenever you need more detail
in following the example. In general, you don't
need to be familiar with the

PLOT(1) manual page. However, if you reach a point

where you want to tailor some of the plots, then

you can read this manual page to see how to change

the plot commands in the associated .pc files
produced by est.

The same applies to getting started with the
prototype tools. First, review the Programmer Notes to

understand the caveats in running the tools. Then

browse the "Description" section of the manual page

and concentrate on the "Example" section of the

manual page.

_____________________________________________________

Getting Started, Running the Examples

UNIX Version - After installing the UNIX version of

the tools, change directories to the directory your

tools were installed in and then to the testdir
subdirectory under this directory. There

are two sets of project data provided in this

directory. The first is associated with project

tst. This data is in files tst.fp and tst.ft.

First run the est program against this project data

by typing est tst. Note the tabular output report
produced. Manual page EST(1) in Appendix A can be
used to interpret the contents of this report. The
program creates

the plot commands in file tst.pc. Now, you can

type plot tst to generate the plots. In the UNIX

version, a file tst.po containing pic(1) and

troff(1) commands is produced (see the UNIX User

Reference Manual for more information on pic and

troff). You can now run your favorite command for
formatting troff text files[1] and routing the
output to PostScript printers or other printers
with graphics capabilities with the tst.po file.
Don't forget to either first run the UNIX pic(1)
command against the file or to include the
appropriate option on the command line of the troff
text formatter to preprocess the file using the
pic(1) command. The

__________

1. Examples of such commands are mmx(1), mmt(1), xroff(1).

An example invocation of such a command with the tst
data would be mmt -p tst.po, where the -p option on
the command line indicates the file should first be
processed by pic(1). Check with your UNIX system
administrator to find out what commands are
available on your UNIX system. Reference [GE]
provides further information on document formatting
commands under UNIX.


second set of project data with project name

tst_stg is the same as the first except "staged

delivery" information has been added to the failure

data (see the .FT(5) manual page in Appendix A for
a description of staged delivery). You may now
want to run the est and plot

programs with this project data and compare the

resulting tabular report and plots with the tst

project data.

MS-DOS Version - If you have created a directory on

your hard disk with the program and data files

included with the distribution diskette, then you

should first "change directories" to this direc-

tory. As with the UNIX version, there are two sets

of project data provided. The first is associated

with project tst. This data is in files tst.fp and

tst.ft. First run the est program against this

project data by typing est tst. Note the tabular

output report produced. Manual page EST(1) in
Appendix A can be used to interpret the contents of
this report. In the MS-DOS

version of the data files, the genplt parameter has

been set in the failure parameter file tst.fp so no
plot commands are produced (this was done because
PCs not having a math coprocessor chip will require
a long time to run). Instead, the plot commands
corresponding to project tst have already been
created and supplied as file tst.pc with the
distribution

diskette. You may now want to run plot tst to
produce the plots directly on your video monitor. The

second set of project data with project name

tst_stg is the same as the first except "staged

delivery" information has been added to the failure

data (see the .FT(5) manual page in Appendix A for
a description of staged delivery). You may now
want to run the est and plot

programs with this project data and compare the

resulting tabular report and plots with the tst

project data. Again, genplt has been set so no

plot commands are generated. Instead, tst_stg.pc

file has been provided with your distribution

diskette.
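A comparable MS-DOS session, assuming the files were installed in a directory named \SRETOOLS (the directory name here is only an example), might be:

     rem change to the directory holding the programs and data
     cd \SRETOOLS
     rem produce the tabular report (genplt suppresses plot generation)
     est tst
     rem draw the supplied tst.pc plots on the video monitor
     plot tst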


When running under MS-DOS, you can get a hard copy of screen output by using the MS-DOS mode and graphics commands and a locally connected dot-matrix printer (see your MS-DOS User's Guide). To do this, you generally execute the mode command to define the characteristics of your printer and then the graphics command to load a memory-resident program. To print a copy of the display currently appearing on your terminal's screen, you would press the "Prt Sc" or "Print Screen" key (the name of this key depends on exactly what type of keyboard you have).
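For example, a session to set up screen printing might look like this (the mode arguments are illustrative only; the correct values depend on your printer and are given in your MS-DOS User's Guide):

     rem define the printer characteristics, e.g. 80 columns, 6 lines/inch
     mode lpt1: 80,6
     rem load the memory-resident screen-print support
     graphics
     rem now pressing Prt Sc prints the current screen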

The program plot(1) that is provided with this tool set is a useful tool in itself for producing displays. The novice user need not learn the plot commands that the plot(1) program uses in generating plots. The expert user can begin writing analysis programs that generate .pc files to create their own graphs. Or, better yet, you can change the .pc files created by the est(1) program to add additional lines, points, labels, and so on to particular graphs. One approach to using the tools is to do the CPU-heavy est(1) runs on a UNIX system with processing horsepower. Then download the resulting .pc files to a PC, edit the .pc files to do "touch-ups" and customize graphs, and run plot(1) on the PC to immediately see the effects of the changes to the .pc files. Finally, the .pc files can then be uploaded to a UNIX system to run off final reports with the graphical output on a laser printer.
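Sketched as a sequence (the file-transfer step is deliberately generic, since the mechanism linking your UNIX system and PC is site-specific):

     # on the UNIX system with the processing horsepower:
     est tst                  # the heavy CPU run; writes tst.pc
     # download tst.pc to the PC, edit it there, and preview with:
     #     plot tst
     # then upload the touched-up .pc files back to the UNIX system
     # and run off the final report on the laser printer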

For those who may be familiar with the RELTOOLS tool set and the reltab and relplt programs, this tool set provides everything in the way of features of those tools, plus more. The structure of the input files for the est(1) program is quite similar to the input files of the reltab and relplt programs. The one notable exception is the failure time file. For the reltab/relplt programs, the corresponding file is referred to as a failure interval file. The times in the failure interval file are "times between failures" rather than actual "failure times." The formats of the failure interval and failure time files are different. The failure parameter files for both est(1) and reltab/relplt are almost identical (there are a few differences in some parameter names).
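To make the distinction concrete (the layouts below are schematic, not the exact file formats): if failures occur 3, 10, and 15 hours into test, a failure time file for est(1) records the cumulative times, while a reltab/relplt failure interval file records the gaps between successive failures:

     failure times:      3   10   15
     failure intervals:  3    7    5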

_____________________________________________________


Caveats

Version 3.7 is a "Beta Trial" version of the tools that is being made available on a "friendly user" basis. As such, the basic functionality of the tools has been extensively tested and the results compared with the predecessor software RELTAB/RELPLT. With this version of the tool, the heuristic algorithm (in EST) that determines scale values for the X and Y axes has not been fully implemented (this algorithm selects a scale so the X and Y scale values printed contain only a few significant digits). Also, some of the less frequently used PLOT PARAMETER options for EST (such as charht, clopt, conlvl, dshopt, grdopt, xwinb, xwine, ywinb, ywine) have not been fully tested.

Users of this "Beta Trial" version of the tools are asked to communicate any problems via email to "whuxr!wwe" or "[email protected]".

_____________________________________________________

References

[ATTa] UNIX System V Release 3.0 User Reference Manual; to order, call the AT&T Customer's Information Center at 1-800-432-6600 and order Select Code 307-184.

[ATTb] Reliability by Design, Chapter 8; to order, call the AT&T Customer's Information Center at 1-800-432-6600.

[MIO] Musa, J. D., A. Iannino, and K. Okumoto, Software Reliability - Measurement, Prediction, Application, McGraw-Hill, 1987, ISBN 0-07-044093-X.

[MA] Musa, J. and A. F. Ackerman, "Quantifying Software Validation: When to Stop Testing?", IEEE Software, May 1989, pp. 19-27.

[GE] Gehani, N., Document Formatting on the UNIX System, Silicon Press, 1986, ISBN 0-9615336-0-9.

_____________________________________________________

Handbook.zip

DATA/README.TXT

This "DATA" directory includes raw data sets studied in theHandbook of Software Reliability Engineering. They are:

Chapter 4
---------
SYS1 - 136 TBFs
SS3 - 278 TBFs
CSR1 - 397 TBFs
CSR2 - 129 TBFs
CSR3 - 104 TBFs

Chapter 7
---------
J1 - 62 failure-count data
J2 - 181 failure-count data
J3 - 41 failure-count data
J4 - 114 failure-count data
J5 - 73 failure-count data
SYS1 - 136 time-between-failures data
SYS2 - 86 time-between-failures data
SYS3 - 207 time-between-failures data

Chapter 8
---------
Table 8.20 - distributed system processor failure data

Chapter 9
---------
ODC data - 1207 defects

Chapter 10
----------
SS4 - 197 time-between-failures data
S27 - 41 time-between-failures data
SS1 - 81 failure-count data
S2 - 54 time-between-failures data

Chapter 11
----------
DataSet 1 - 140 FCs
DataSet 2 - 111 TBFs
DataSet 3 - 81 FCs (TROPICO R-1500)
DataSet 4 - 32 FCs (TROPICO R-4096)
DataSet 5 - 14 yearly FCs [Onom93]
DataSet 6 - 100 failures (NASA Space Shuttle Data)

Chapter 12
----------
MIS data - 390 modules

Chapter 16
----------
Galileo CDS - 41 failure-count data (same as J3)

Chapter 17
----------
Data1 - 27 Failures
Data2 - 136 Failures (same as SYS1)
Data3 - 46 Failures
Data4 - 328 Failures
Data5 - 279 Failures
Data6 - 3207 Failures
Data7 - 535 Failures
Data8 - 481 Failures
Data9 - 55 Failures (199 intervals)
Data10 - 198 Failures
Data11 - 118 Failures
Data12 - 180 Failures
Data13 - 213 Failures
Data14 - 266 Failures

DATA/CH9/ODC1.DAT

ODC1 data
---------

This data set is taken from a large IBM project with several tens of thousands of lines of code. This data set has 1207 data entries. Each data entry represents a detected defect during the development phase. Detailed analysis of this data set using the ODC technique is illustrated in Example 9.4.

Column 1 - Date (Sequential Day)
Column 2 - Defect Type, including:
    "Assn" means Assignment
    "Ck" means checking
    "alg" means algorithm
    "Misc" means Miscellaneous
    etc.
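As a quick check on the listing that follows, a command like the one below could tally the defect types, assuming the file stores one day/defect-type pair per line with the type enclosed in double quotes (awk splits each line on the quote characters; ODC1.DAT is this data set's file):

     awk -F'"' 'NF > 1 { n[$2]++ } END { for (t in n) print n[t], t }' ODC1.DAT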

-----------------------
Day    Defect Type
-----------------------

1"Assn/Ck/alg" 6"Assn/Ck/alg" 6"Misc" 6"Assn/Ck/alg" 6"Misc" 6"Assn/Ck/alg" 6"Misc" 6"Assn/Ck/alg" 6"Assn/Ck/alg" 6"Misc" 7"Misc" 7"Misc" 7"Misc" 7"Function" 7"Assn/Ck/alg" 7"Assn/Ck/alg" 7"Misc" 7"Assn/Ck/alg" 7"Function" 7"Function" 8"Misc" 8"Assn/Ck/alg" 8"Misc" 8"Misc" 8"Misc" 8"Misc" 8"Misc" 8"Assn/Ck/alg" 8"Assn/Ck/alg" 8"Misc" 8"Misc" 8"Misc" 8"Misc" 9"Assn/Ck/alg" 9"Assn/Ck/alg" 9"Assn/Ck/alg" 9"Misc" 10"Assn/Ck/alg" 11"Misc" 11"Misc" 11"Misc" 11"Misc" 11"Assn/Ck/alg" 11"Misc" 11"Misc" 11"Misc" 11"Misc" 11"Misc" 11"Assn/Ck/alg" 11"Assn/Ck/alg" 12"Misc" 12"Assn/Ck/alg" 12"Misc" 12"Misc" 12"Assn/Ck/alg" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Misc" 12"Function" 12"Misc" 12"Misc" 13"Misc" 13"Misc" 13"Assn/Ck/alg" 13"Assn/Ck/alg" 13"Misc" 13"Assn/Ck/alg" 14"Assn/Ck/alg" 14"Function" 14"Misc" 14"Misc" 14"Function" 14"Misc" 14"Assn/Ck/alg" 14"Assn/Ck/alg" 14"Assn/Ck/alg" 14"Misc" 14"Misc" 14"Misc" 14"Misc" 14"Misc" 14"Misc" 14"Misc" 14"Misc" 15"Misc" 15"Misc" 15"Assn/Ck/alg" 15"Assn/Ck/alg" 15"Assn/Ck/alg" 15"Misc" 15"Misc" 15"Misc" 15"Assn/Ck/alg" 15"Assn/Ck/alg" 15"Assn/Ck/alg" 18"Misc" 19"Misc" 19"Function" 19"Misc" 19"Misc" 19"Assn/Ck/alg" 19"Function" 19"Misc" 19"Misc" 20"Assn/Ck/alg" 20"Misc" 20"Misc" 20"Misc" 20"Assn/Ck/alg" 20"Misc" 20"Misc" 20"Misc" 20"Misc" 20"Misc" 20"Function" 20"Misc" 20"Misc" 20"Assn/Ck/alg" 21"Assn/Ck/alg" 21"Misc" 21"Assn/Ck/alg" 21"Misc" 21"Assn/Ck/alg" 21"Assn/Ck/alg" 21"Function" 21"Assn/Ck/alg" 21"Assn/Ck/alg" 21"Assn/Ck/alg" 21"Misc" 21"Assn/Ck/alg" 21"Assn/Ck/alg" 21"Assn/Ck/alg" 22"Misc" 22"Assn/Ck/alg" 22"Misc" 22"Function" 22"Assn/Ck/alg" 22"Assn/Ck/alg" 22"Misc" 22"Assn/Ck/alg" 22"Assn/Ck/alg" 22"Assn/Ck/alg" 22"Assn/Ck/alg" 23"Misc" 23"Misc" 23"Function" 23"Assn/Ck/alg" 23"Function" 23"Assn/Ck/alg" 23"Assn/Ck/alg" 24"Assn/Ck/alg" 24"Function" 24"Assn/Ck/alg" 25"Assn/Ck/alg" 25"Assn/Ck/alg" 25"Assn/Ck/alg" 25"Assn/Ck/alg" 25"Misc" 25"Misc" 25"Assn/Ck/alg" 25"Assn/Ck/alg" 25"Misc" 25"Misc" 25"Misc" 26"Assn/Ck/alg" 26"Assn/Ck/alg" 26"Misc" 26"Misc" 26"Assn/Ck/alg" 26"Assn/Ck/alg" 26"Misc" 26"Assn/Ck/alg" 26"Misc" 26"Assn/Ck/alg" 27"Misc" 27"Misc" 27"Misc" 27"Assn/Ck/alg" 27"Function" 27"Function" 27"Function" 27"Assn/Ck/alg" 27"Function" 27"Misc" 27"Misc" 27"Misc" 27"Function" 27"Function" 27"Misc" 27"Assn/Ck/alg" 27"Function" 28"Misc" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Function" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Misc" 28"Assn/Ck/alg" 28"Misc" 28"Assn/Ck/alg" 28"Misc" 28"Assn/Ck/alg" 28"Misc" 28"Assn/Ck/alg" 28"Misc" 28"Function" 28"Assn/Ck/alg" 28"Assn/Ck/alg" 28"Misc" 28"Misc" 28"Misc" 29"Function" 29"Misc" 29"Misc" 29"Misc" 29"Misc" 29"Assn/Ck/alg" 29"Assn/Ck/alg" 29"Assn/Ck/alg" 29"Function" 29"Assn/Ck/alg" 29"Function" 29"Function" 29"Misc" 29"Function" 29"Function" 29"Assn/Ck/alg" 29"Assn/Ck/alg" 30"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Misc" 32"Misc" 32"Function" 32"Function" 32"Assn/Ck/alg" 32"Misc" 32"Function" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Misc" 32"Misc" 32"Misc" 32"Misc" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Function" 32"Misc" 32"Misc" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 32"Assn/Ck/alg" 33"Assn/Ck/alg" 33"Function" 33"Assn/Ck/alg" 33"Assn/Ck/alg" 33"Misc" 33"Assn/Ck/alg" 33"Function" 33"Misc" 33"Assn/Ck/alg" 33"Function" 33"Misc" 33"Function" 33"Assn/Ck/alg" 34"Function" 34"Assn/Ck/alg" 34"Misc" 34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Misc" 34"Assn/Ck/alg" 34"Function" 34"Function" 34"Misc" 34"Misc" 34"Misc" 34"Assn/Ck/alg" 
34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Assn/Ck/alg" 34"Function" 34"Misc" 34"Assn/Ck/alg" 34"Misc" 35"Misc" 35"Misc" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Misc" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Misc" 35"Function" 35"Misc" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Function" 35"Misc" 35"Misc" 35"Misc" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 35"Misc" 35"Assn/Ck/alg" 35"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Function" 36"Misc" 36"Assn/Ck/alg" 36"Misc" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 36"Assn/Ck/alg" 37"Assn/Ck/alg" 37"Misc" 38"Assn/Ck/alg" 38"Assn/Ck/alg" 38"Function" 39"Assn/Ck/alg" 39"Assn/Ck/alg" 39"Misc" 39"Function" 39"Misc" 39"Assn/Ck/alg" 39"Assn/Ck/alg" 39"Misc" 39"Assn/Ck/alg" 39"Assn/Ck/alg" 39"Assn/Ck/alg" 39"Function" 40"Assn/Ck/alg" 40"Misc" 40"Assn/Ck/alg" 40"Misc" 40"Misc" 40"Misc" 41"Misc" 41"Misc" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Function" 41"Function" 41"Misc" 41"Misc" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Assn/Ck/alg" 41"Misc" 41"Assn/Ck/alg" 42"Misc" 42"Assn/Ck/alg" 42"Misc" 42"Misc" 42"Function" 42"Misc" 42"Function" 42"Assn/Ck/alg" 42"Assn/Ck/alg" 42"Assn/Ck/alg" 43"Assn/Ck/alg" 43"Assn/Ck/alg" 43"Assn/Ck/alg" 43"Misc" 43"Assn/Ck/alg" 43"Function" 43"Assn/Ck/alg" 43"Function" 43"Misc" 43"Function" 44"Misc" 44"Misc" 44"Assn/Ck/alg" 44"Assn/Ck/alg" 44"Assn/Ck/alg" 44"Assn/Ck/alg" 45"Function" 45"Assn/Ck/alg" 45"Assn/Ck/alg" 45"Assn/Ck/alg" 45"Function" 46"Assn/Ck/alg" 46"Function" 46"Assn/Ck/alg" 46"Misc" 46"Function" 46"Function" 46"Misc" 46"Misc" 46"Misc" 46"Assn/Ck/alg" 46"Assn/Ck/alg" 46"Misc" 46"Misc" 46"Function" 46"Assn/Ck/alg" 46"Misc" 46"Assn/Ck/alg" 47"Function" 47"Misc" 47"Function" 47"Function" 47"Function" 47"Assn/Ck/alg" 47"Function" 47"Assn/Ck/alg" 47"Assn/Ck/alg" 47"Assn/Ck/alg" 47"Misc" 47"Assn/Ck/alg" 47"Misc" 47"Misc" 47"Misc" 47"Misc" 47"Misc" 47"Function" 47"Misc" 47"Misc" 47"Assn/Ck/alg" 47"Misc" 47"Assn/Ck/alg" 47"Misc" 47"Assn/Ck/alg" 47"Misc" 47"Misc" 48"Misc" 48"Misc" 48"Misc" 48"Misc" 48"Function" 48"Assn/Ck/alg" 48"Assn/Ck/alg" 48"Assn/Ck/alg" 48"Misc" 48"Misc" 48"Assn/Ck/alg" 48"Assn/Ck/alg" 49"Misc" 49"Misc" 49"Assn/Ck/alg" 49"Misc" 49"Misc" 49"Misc" 49"Misc" 49"Assn/Ck/alg" 49"Misc" 49"Assn/Ck/alg" 49"Assn/Ck/alg" 49"Assn/Ck/alg" 49"Assn/Ck/alg" 49"Misc" 49"Misc" 49"Misc" 49"Assn/Ck/alg" 49"Misc" 49"Misc" 49"Assn/Ck/alg" 49"Misc" 49"Misc" 49"Misc" 49"Misc" 49"Misc" 49"Misc" 50"Assn/Ck/alg" 50"Misc" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Misc" 50"Misc" 50"Misc" 50"Misc" 50"Assn/Ck/alg" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Function" 50"Function" 50"Function" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Misc" 50"Misc" 50"Assn/Ck/alg" 50"Assn/Ck/alg" 50"Misc" 50"Misc" 51"Assn/Ck/alg" 51"Assn/Ck/alg" 51"Misc" 51"Misc" 51"Assn/Ck/alg" 51"Assn/Ck/alg" 51"Function" 51"Function" 51"Function" 51"Misc" 51"Function" 51"Misc" 51"Misc" 51"Misc" 51"Misc" 51"Function" 51"Misc" 51"Assn/Ck/alg" 51"Assn/Ck/alg" 51"Misc" 51"Assn/Ck/alg" 51"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 52"Misc" 53"Assn/Ck/alg" 53"Assn/Ck/alg" 53"Assn/Ck/alg" 53"Misc" 53"Assn/Ck/alg" 53"Misc" 53"Assn/Ck/alg" 53"Assn/Ck/alg" 53"Assn/Ck/alg" 53"Misc" 53"Assn/Ck/alg" 53"Function" 54"Assn/Ck/alg" 
54"Assn/Ck/alg" 54"Assn/Ck/alg" 54"Function" 54"Misc" 54"Assn/Ck/alg" 54"Assn/Ck/alg" 54"Assn/Ck/alg" 54"Misc" 54"Misc" 54"Misc" 54"Misc" 54"Assn/Ck/alg" 54"Misc" 55"Assn/Ck/alg" 55"Assn/Ck/alg" 55"Misc" 55"Misc" 55"Misc" 55"Misc" 55"Misc" 55"Misc" 55"Misc" 55"Misc" 55"Assn/Ck/alg" 55"Misc" 55"Misc" 55"Misc" 56"Assn/Ck/alg" 56"Misc" 56"Function" 56"Misc" 56"Misc" 56"Function" 56"Assn/Ck/alg" 56"Assn/Ck/alg" 56"Assn/Ck/alg" 56"Misc" 56"Misc" 56"Misc" 56"Function" 56"Assn/Ck/alg" 56"Function" 56"Function" 56"Function" 56"Assn/Ck/alg" 57"Misc" 57"Function" 57"Assn/Ck/alg" 57"Function" 57"Assn/Ck/alg" 57"Function" 57"Assn/Ck/alg" 57"Function" 57"Assn/Ck/alg" 57"Assn/Ck/alg" 57"Function" 57"Assn/Ck/alg" 57"Assn/Ck/alg" 57"Assn/Ck/alg" 57"Function" 57"Misc" 57"Assn/Ck/alg" 57"Misc" 57"Assn/Ck/alg" 58"Misc" 58"Function" 59"Misc" 59"Misc" 59"Misc" 59"Misc" 59"Misc" 59"Misc" 59"Function" 59"Assn/Ck/alg" 59"Misc" 59"Assn/Ck/alg" 59"Misc" 59"Misc" 59"Misc" 59"Assn/Ck/alg" 59"Misc" 59"Misc" 59"Misc" 59"Misc" 59"Misc" 59"Assn/Ck/alg" 59"Function" 60"Assn/Ck/alg" 60"Misc" 60"Function" 60"Assn/Ck/alg" 60"Assn/Ck/alg" 61"Function" 61"Assn/Ck/alg" 61"Misc" 61"Assn/Ck/alg" 61"Misc" 61"Assn/Ck/alg" 61"Misc" 61"Misc" 61"Misc" 62"Misc" 62"Assn/Ck/alg" 62"Assn/Ck/alg" 62"Assn/Ck/alg" 62"Function" 62"Misc" 62"Misc" 62"Misc" 62"Misc" 62"Misc" 62"Assn/Ck/alg" 62"Misc" 62"Assn/Ck/alg" 62"Misc" 62"Misc" 62"Assn/Ck/alg" 63"Misc" 63"Function" 63"Assn/Ck/alg" 63"Assn/Ck/alg" 63"Misc" 63"Misc" 63"Misc" 63"Assn/Ck/alg" 63"Misc" 63"Misc" 63"Misc" 63"Misc" 63"Assn/Ck/alg" 63"Assn/Ck/alg" 63"Misc" 64"Misc" 64"Assn/Ck/alg" 64"Assn/Ck/alg" 64"Misc" 64"Assn/Ck/alg" 64"Assn/Ck/alg" 64"Assn/Ck/alg" 64"Misc" 64"Misc" 64"Assn/Ck/alg" 64"Assn/Ck/alg" 64"Misc" 64"Misc" 65"Assn/Ck/alg" 65"Assn/Ck/alg" 66"Misc" 66"Misc" 66"Misc" 66"Misc" 66"Assn/Ck/alg" 66"Function" 67"Misc" 67"Assn/Ck/alg" 67"Misc" 67"Misc" 67"Misc" 67"Assn/Ck/alg" 67"Misc" 67"Misc" 67"Misc" 67"Misc" 67"Misc" 67"Misc" 67"Misc" 67"Misc" 67"Function" 67"Function" 67"Assn/Ck/alg" 67"Misc" 67"Misc" 67"Misc" 67"Assn/Ck/alg" 67"Misc" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Misc" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Assn/Ck/alg" 68"Misc" 68"Function" 68"Misc" 68"Misc" 68"Misc" 68"Assn/Ck/alg" 68"Function" 68"Function" 69"Misc" 69"Assn/Ck/alg" 69"Assn/Ck/alg" 69"Assn/Ck/alg" 69"Misc" 69"Function" 69"Misc" 69"Misc" 69"Assn/Ck/alg" 69"Function" 69"Assn/Ck/alg" 69"Function" 69"Function" 69"Function" 69"Misc" 69"Assn/Ck/alg" 69"Misc" 69"Misc" 69"Function" 69"Assn/Ck/alg" 69"Function" 70"Assn/Ck/alg" 70"Misc" 70"Function" 70"Assn/Ck/alg" 70"Misc" 70"Assn/Ck/alg" 70"Assn/Ck/alg" 70"Assn/Ck/alg" 70"Misc" 70"Assn/Ck/alg" 70"Assn/Ck/alg" 70"Function" 70"Misc"