Index
Numbers
80-20 principle, 128. See also Pareto diagrams
99.999% availability, 360–361

A
Abstract class, OO technology, 332
Acceptance testing (vendor-developed software), 302–304
Accuracy. See Reliability
Active backlogs, 108
Active Design Reviews method, 180
Activity-based costs, process improvement, 462–464
Activity levels, process improvement, 462–466
Addition, interval scale and, 60
Alignment principle
  measuring value of process improvement, 447–448
  overview of, 443–444
Alternative-form method, reliability testing, 75
Alternative hypotheses, 483
American National Standards Institute (ANSI), 86
Analysis-driven data collection, 472
Analysis phase, OO model, 29
Analysis, TQM and, 8
ANSI. See American National Standards Institute
APAR. See Authorized program analysis report
Applicability, reliability models, 219
Appraisal Requirements for CMMI (ARC), 439
AS/400, 141–143, 193–194
Assessment. See also Malcolm Baldrige assessment; Quality assessment; Software project assessment
  vs. audit, 414–415
  internal assessments, 421–422
  ISO standard for, 414
  organizational-level, 416
  process improvement and, 454
  qualitative vs. quantitative, 455–456
  report, 429–433
  summary, products good enough to ship, 306–307
Attenuation, 76–77
Audits
  vs. assessments, 414–415
  ISO 9000, 49
  quality audits, 397
Authorized program analysis report (APAR), 139–140
Availability metrics, 359–374
  99.999% availability, 360–361
  availability compared with reliability, 363
  defect rates and, 364–366
  key factors in, 360
  outage cause analysis, 369–370
  outage data collection, 369, 371–372
  outage indicators, 366
  outage tracking forms, 367–368
  outages in-process, 372–373
  productivity and, 362
  references, 373–374
  reliability and, 362–364
  summary, 373
  system availability technologies, 363–364
Average method size, 334–335

B
Back end of development process, 56–57
  defect tracking model for, 269
  quality metrics and, 57
  reliability models and, 256
Backlog management index (BMI)
  control charts and, 151
  software maintenance and, 106–107
Backlogs, defect. See PTR arrival and backlog model
Baselines
  process capability, 450
  process improvement and, 455–456
Basic measures
  percentages, 63–65
  proportions, 62–63
  rates, 65–66
  ratios, 62
  six sigma, 66–70
Benchmarks, 455–456. See also Baselines
Beta program outages, 372–373
Black box approach, software testing, 311
Blitz testing, 255
BMI. See Backlog management index
BOOTSTRAP method, 416
Bugs, software, 4, 86
Business applications, availability, 361

C
Calendar-time data, 210, 215, 252
Capability
  management models and, 257
  reliability models and, 219
Capability Maturity Model (CMM), 39–44
  applications of, 40–41
  CMM-based methods, 417–418
  control charts and, 149
  defect rates and, 365
  defect removal effectiveness (DRE) and, 181–183
  industry leadership and, 458–459
  maturity questionnaire, 424
  process improvement and, 437–438
  process maturity levels, 39–40
  six step cycle in, 417
  Software Engineering Institute (SEI) and, 9
Capability Maturity Model Integration (CMMI)
  model based approach to process improvement, 438
  process maturity levels, 42–44, 438
  staged vs. continuous representations, 42, 44, 440–441
Card and Glass model, 321–322
CASE tools, 238, 484
Causality, 79–82
  correlation and, 79
  criteria for measuring, 80–81
Cause-and-effect diagrams
  defined, 130
  fishbone diagram example (Grady and Caswell), 152–153
CBA IPI. See CMM-Based Appraisal for Internal Process Improvement
CBO. See Coupling Between Object Classes
CDF. See Cumulative distribution function
Certification, ISO standard, 414
Changed source instructions (CSI), 91
Check sheets. See Checklists
Checklists, 130–133
  defined, 128
  effectiveness of, 130–131
  illustration of, 129
  inspection checklist, 238
  process compliance and, 449
  types of, 131–133
CHECKPOINT tool, SPR, 44
CK (Chidamber and Kemerer) metrics suite, 337–343
  applying to two companies, 338–339
  combining with other metrics, 342
  list of metrics in, 337–338
  validation studies, 339–343
Class hierarchy, OO technology, 332–334
Classes, OO technology, 332, 347
Classification, scientific, 59
Cleanroom methodology, 32–35
  development team control and, 32–34
  front-end defect removal and, 180
  illustration of, 33
  overview of, 32
  stages of, 34–35
  statistical testing and, 34
Cleanroom Software Engineering, 32
Cluster sampling, 379
CMM. See Capability Maturity Model
CMM-Based Appraisal for Internal Process Improvement (CBA IPI), 417–418, 438
CMMI. See Capability Maturity Model Integration
Code development phase, 242
Code inspections. See Design review/code inspection (DR/CI)
Code integration pattern, 242–245
  early defect removal and, 245
  integration over time, 243
  middle of development process and, 269
  project-to-project comparisons, 244
  uses of, 244
Code reuse. See Reuse
Commitment
  management commitment, 474
  team commitment, 297
Companies, customer satisfaction with, 388–389
Complexity metrics. See also Structural complexity metrics
  CK metrics suite, 336–337
  compared with reliability and quality management models, 311
  cyclomatic complexity, 315–318
  Halstead's software science, 314–315
  lines of code (LOC) metric, 312–313
  module design examples, 322–328
  references, 329
  structural metrics, 319–322
  summary, 328
  syntactic constructs for, 318–319
  system model, 320–322
Component quality analysis, 135–136, 369–370
Component test (CT) phase, PTR backlogs, 295
Components
  availability, 362
  defined, 193
  testing, 17
Compression factor, reliability models, 224–226
Computer science vs. software science, 314
Concepts, measurement theory, 58
Concrete class, OO technology, 332
Confidence intervals
  reliability and, 203
  software development and, 482
Confidence level, survey samples and, 380–381
"Conformance to requirements," 2
Construct validity, 71
Content validity, 71–72
Continuous representation (systems engineering), CMMI, 42, 44, 440–441
Control charts, 143–152
  defect removal effectiveness, 151–152
  defects, 146–148
  defined, 130
  effective use of, 149–151
  illustration of, 129
  inspection effectiveness, 148–149
  process improvements and, 149
  as quantitative analysis tool, 402
  SPC and, 143–144, 481–482
  types of, 145–146
  uses of, 145
Control flow loops, 315–316, 318–319
Correlation, 77–80
  causality and, 79
  logical guidance for use of, 81
  metrics and, 76, 78–79
  module defect level with other metrics, 324–325
  outliers and, 80
  weak relationships and, 78
Cost effectiveness, defect removal effectiveness (DRE) and, 177–180
Cost performance index (CPI), 152
Coupling Between Object Classes (CBO), 338
CPI. See Cost performance index
CPU utilization, 286–289, 301
Criteria for model evaluation, 207, 218–219, 257
Criterion-related validity, 71
Critical problems, 124, 293, 401
Crosby, P. B., 2, 7, 9
CSI. See Changed source instructions
Cultural aspects, TQM, 8
Cumulative distribution function (CDF), 189, 208, 214
CUPRIMDA, 381–382
CUPRIMDSO, 4, 98
Curvilinear model, size to defect rate, 312–313
Customer-Centered Six Sigma (Naumann and Hoisington), 69–70
Customer problems metric, 96–99
  problems per user month (PUM), 96–97
  relationship of defects and customer satisfaction, 97–98, 99
Customer programs. See Early customer programs (ECP)
Customer satisfaction, 375–395
  companies and, 388–389
  customer problems metric and, 97–98
  data analysis, 381
  data collection, 376–377
  defects and customer problems and, 99
  histogram analysis, 137
  how much is enough, 390–392
  market share and, 390–391
  metrics, 98–100
  quality and, 4–5
  references, 393–394
  sample size, 379–381
  sampling methods, 377–379
  small organizations and, 392–393
  specific areas vs. overall satisfaction, 382–388
  summary, 393
  surveys, 98, 371, 376–381
  UPRIMD categories, 383–388
Customers
  cost of recruiting vs. cost of retaining, 375
  feedback from, 376–377
  as focus of TQM programs, 8
  perspective on defect rates, 92–93
  quality evaluation by, 3, 255
Cyclomatic complexity (McCabe), 315–318
  correlation with LOC, 317
  formula for, 315
  program testability and maintainability, 315–316
  scatter diagrams, 140–141
  uses of, 317–318

D
Data
  balancing analysis and collection, 472
  complexity, 320–321
  gathering and testing, 57–58
  quality control and, 470–472
  software engineering. See Software engineering data
  summarization format, 407
Data collection
  customer satisfaction surveys and, 376–377
  data quality and, 470
  methodology (Basili and Weiss), 118
  questionnaire, 423
Data quality, 203, 470–472, 475
Defect arrival data, 223–224. See also PTR arrival and backlog model
  distributing over time, 226–229
  exponential model and, 209
  OO projects and, 349–350
  outcome indicators and, 299
  reliability growth models and, 254
Defect arrival rate
  machine testing and, 101–102
  software testing and, 279–282
Defect arrivals curve, 282
Defect attributes, orthogonal defect classification (ODC), 268
Defect backlog. See PTR arrival and backlog model
Defect data, 119–123
  documentation defects, 122–123
  inspection defects, 119
  interface defects, 119
  logic defects, 121
Defect density
  customer problems metric and, 97–98
  customer satisfaction and, 99
  lines of code (LOC) and, 312
  product quality metric and, 86–87
Defect discovery periods, orthogonal defect classification (ODC), 267
Defect injection
  activities, 164–165
  by phases, 165–171
Defect origin, 159, 166, 262–263
Defect origin-where found matrix, 169–171
Defect prevention. See also Defect prevention process
  CMM model and, 41
  Rayleigh model and, 236
Defect prevention process (DPP), 35–39
  applying to waterfall model, 38
  IBM's success with, 36–37
  illustration of, 37
  key elements of, 35–36
  steps in, 35, 131
Defect rates. See also Six sigma
  availability metrics and, 364–366
  defective parts per million (DPPM), 69
  machine testing and, 100–101
  mean time to failure (MTTF) and, 364
  OO quality and, 347–348
  by phase, 265–266
  process improvement and, 461
  software and, 66
Defect removal
  activities, 164–165
  code integration pattern and, 245
  model evaluation and, 257
  patterns, 239–240
  phase-based, 103, 165–171
  Rayleigh model and, 236
Defect removal effectiveness (DRE), 159–185
  control charts and, 151–152
  cost effectiveness and, 177–180
  defect injection and removal activities, 164–165
  defect injection and removal by phases, 165–171
  formulas for measuring, 160–161
  literature review, 160–164
  overview of, 103–105, 159–160
  phase-based defect removal model, 172–174
  process maturity levels and, 181–183
  references, 184–185
  summary, 183–184
  two-phase model, 174–177
Defect removal model (DRM)
  cost effectiveness of, 177–180
  defect removal patterns and, 239–240
  examples of, 173
  overview of, 172–174
  two-phase model, 174–177
Defect triggers, orthogonal defect classification (ODC), 267–268
Defective fixes, 110
Defective parts per million (DPPM), 66–67
Defects
  backlog over time, 283–284
  by category (Pareto analysis), 133–134
  by component (Pareto analysis), 135
  control charts and, 146–148
  defined, 86
  distribution of, 295–296
  exponential model and, 209
  ODC classification, 267
  by OO class, 347
  relationship to failure, 87
  by severity (histogram analysis), 136
  by testing phases, 297
  tracking and reporting, 258
Defects per thousand source lines of code (KLOC), 66
Definitions, 58
Delayed S model, 215–216
Delinquent fixes, 138–139
Department of Defense (DoD), 365, 415
Deployment, 448
Depth of Inheritance Tree (DIT), 338
Design
  high-level design (HLD), 16, 258
  low-level design (LLD), 16, 258
Design phase, OO model, 28
Design review/code inspection (DR/CI), 238, 255
Developers, data provided by, 475
Development. See also Object-oriented development
  environment for OO projects, 352–353
  iterative model. See Iterative development process
  measurement and, 484
  methodologies, 413
  practices for OO projects and, 355
  Rayleigh model and, 480–481
  SPC and, 481–483
Development cycle
  early indicators, 400
  quality assessment and, 399
Development phases
  integration phase, 56
  multiple models for evaluating, 257
  quality indicators by, 401
Discrepancy reports (DR), 161–162
Distribution of defects, 295–296
DIT. See Depth of Inheritance Tree
"Do it right the first time" principle, 192, 236
DO WHILE statements, 319
Documentation
  customer satisfaction and, 384
  defects, 122–123
  process improvement and, 446–447
DoD. See Department of Defense
Downtime
  availability metrics and, 360
  MTTR and, 363
DPP. See Defect prevention process (DPP)
DPPM. See Defective parts per million
DR/CI (Design review/code inspection), 238, 255
DR. See Discrepancy reports
DRE. See Defect removal effectiveness
DRM. See Defect removal model
Dynamic reliability models, 187–188

E
Early customer programs (ECP), 18–19, 274–275, 305
Early defect removal
  code integration pattern and, 245
  model evaluation and, 257
  phase-based defect removal, 103
  Rayleigh model and, 236
Early detection percentage, 161–162
ECP. See Early customer programs
Education, OO projects, 351–352
Effort/defect matrix, 242, 259, 261
Effort indicators
  difficulty of establishing, 301
  effort/outcome model and, 298
  S curve and, 299
Effort/outcome model, 298–302
  effort indicators and outcome indicators, 298
  example scenarios, 299–300
  improvement actions, 301
  matrix structure of, 299
  quality management and, 241, 402
  test coverage and scoring, 301–302
Engineering. See Software engineering
Entry-Task-Validation-Exit (ETVX) paradigm, waterfall model, 14
Error detection efficiency. See Defect removal effectiveness
Errors
  defined, 86
  detection rate, 208
  injection rate, 239–240
  listing, 131
  measurement errors, 73–74
ETVX. See Entry-Task-Validation-Exit paradigm, waterfall model
European Foundation for Quality Management, 47
European Quality Award, 47
Evaluation phase, quality assessment, 402–405
  evaluation criteria, 405
  qualitative data, 403–405
  quantitative data, 402–403
Evolutionary prototyping, 21
Execution-time data, 210
Executive leadership, 8
Expert opinions, 403, 405
Exponential model, 208–211
  as case of Weibull family, 208
  ease of implementation, 209
  predictive validity, 211
  reliability and survival studies and, 208
  software reliability and, 208–209
  testing and, 209–210
Extreme Programming (XP), 31–32
Extreme-value distributions, 189

F
Face-to-face interactions, 428
Face-to-face interviews, 376
Fact gathering
  phase 1—project assessment, 422–423
  phase 2—project assessment, 425–426
Fagan's inspection model, 180
Failures
  defined, 86
  factors in project failure, 426–427
  instantaneous failure rate, 208
  relationship to defects, 87
Fan-in/fan-out metrics (Yourdon and Constantine), 319–320
Fault containment. See Defect removal effectiveness
Fault count models
  assumptions, 218
  reliability growth models and, 212
Faults, 86
Field defects
  arrival data, 228–229
  compared with tested defects, 236–237
Fishbone diagrams, 130, 152–153. See also Cause-and-effect diagrams
"Fitness for use," 2
"Five 9's" availability, 360–361
Fix backlog metric, 106–107
Fix quality, 109–110
Fix response time metric, 107–108
Flow control loops, 316–317, 319–320
FPs (Function points), 56. See also Function point metrics
Frequency bars. See Pareto diagrams
Frequency of outages, 360
Front end of development process, 56–57, 269
Function point metrics, 93–96
  background and overview of, 456
  compared with LOC counts, 93–94
  example defect rates, 96
  issues with, 95
  measuring process improvements at activity levels, 462–466
  OO metrics and, 334
  opportunities for error (OFE) and, 93
Function points. See FPs
Functional defects, 267
Functional verification test (FVT), 422–423
Functions, 93
FURPS (Hewlett-Packard customer satisfaction), 4, 98

G
Goal/Questions/Metric paradigm (Basili and Weiss), 110, 472
Goel generalized nonhomogeneous Poisson process model, 214
Goel-Okumoto (GO) Imperfect Debugging model, 213, 217
Goel-Okumoto Nonhomogeneous Poisson Process (NHPP) model, 213–215
Goodness-of-fit test (Kolmogorov-Smirnov), 222
GQM. See Goal/Questions/Metric paradigm (Basili and Weiss)
Graphic methods, modeling and, 253

H
Halstead's software science, 314–315
  formulas for, 314–315
  impact on software measurement, 314–315
  primitive measures of, 314
Hangs
  effort/outcome model and, 301
  software testing and, 289–291
Hazard rate, 208
Hewlett-Packard
  defects by category, 134
  fix responsiveness, 108
  FURPS customer satisfaction, 98
  software metrics programs, 115–116
  TQC program, 7
High availability. See also Availability metrics
  defect rates and, 365
  importance of, 359
  reliability and, 362
High-level design (HLD), 16, 258
Histograms
  defined, 129
  examples of, 136–138
  illustration of, 129
  as quantitative analysis tools, 402
HLD. See High-level design
Human cultural aspects, TQM, 8
Hypotheses
  formulating and testing, 57
  statistical methods and, 483

I
I Know It When I See It (Guaspari), 3
IBM
  eServer iSeries, 273
  Market Driven Quality, 8
  Object Oriented Technology Council (OOTC), 336
  OS/2 and IDP, 26–27
  programming architecture, 14
  Sydney Olympics (2000) project, 302–303
IBM Federal Systems Division, 191, 201
IBM Houston, 161
IBM Owego, 25
IBM Rochester
  OO projects of, 336–337
  software metrics programs, 116–117, 273–276
IBM Rochester software testing, 273–276
  assigning scores to test cases, 275
  defect tracking, 279
  process overview, 273
  reliability metric, 291
  showstopper parameter (critical problems), 293
  system stress tests, 286–287
  testing phases, 274
IDP. See Iterative development process
IE. See Inspection effectiveness
IF-THEN-ELSE statements, 316, 318–319
IFPUG. See International Function Point Users Group
Implementation phase, OO model, 28
Improvement actions. See also Process improvement
  module design and, 326–328
  by project, 432–433
  project assessment and, 427–428
  software testing and, 295–297
In-process availability metrics, 372–373
In-process quality assessment. See Quality assessment
In-process quality metrics, 100–105
  defect arrivals, 101–102
  defect density, 100–101
  defect rate by phase, 265–266
  defect removal effectiveness, 103–105
  effort/outcome model and, 241
  inspection reports, 258–263
  inspection scoring questionnaire, 261
  overview of, 100
  phase-based defect removal, 103
  Rayleigh model and, 237, 258–260
  unit test coverage and defect report, 264–265
In-process software testing metrics, 271–309
  acceptance testing (vendor-developed software), 302–304
  CPU utilization, 286–289
  defect arrivals, 279–282
  defect backlog, 283–284
  effort/outcome model and, 298–302
  implementation recommendations, 294–295
  improvement actions and, 295–297
  mean time to unplanned IPLs, 291–293
  product preparedness for shipping, 304–307
  product size over time, 285
  quality management and, 297–298
  references, 309
  showstopper parameter (critical problems), 293
  summary, 308–309
  system crashes and hangs, 289–291
  test progress S curve, 272–279
Index of variation, 71
Industry leadership factors, 458–459
Inflection S models, 215–216
Information flow metric, 320–321
Infrastructure, process improvement stages, 457–458
Inheritance, OO technology, 332
Inheritance tree depth, 334
Initial program loads (IPLs), 289–293
Injection rate, errors, 239–240
Inspection checklist, 238
Inspection defects, 119, 317
Inspection effectiveness (IE), 148–149, 167, 240–241
Inspection effort, 240–242, 258–262, 269, 481
Inspection model (Fagan), 180
Inspection reports, 258–263
  defect origin and type, 263
  effort and defect rates, 260
  effort/defect matrix, 259
Inspection scoring checklist, 269
Inspection scoring questionnaire, 261–262
Instance variables, OO technology, 332
Instantaneous failure rate, 208
Instruction statements. See Lines of code (LOC)
Integrated product and process development (IPPD), 42
Integration phase, development process, 56
Interface defects, 119, 262, 312
Intermodule complexity, 321–322
Internal assessments, 421–422
Internal consistency method, 75
International Function Point Users Group (IFPUG), 95, 456, 462
International Organization for Standardization (ISO), 47, 414
International Software Quality Exchange (ISQE), 471
Interval scale, 60–61
Interviews, qualitative data and, 400, 403
Intramodule complexity, 321–322
IPL tracking tools, 290
IPLs. See Initial program loads
IPPD. See Integrated product and process development
Ishikawa's seven basic tools. See Quality tools
ISO 9000, 47–51
  audits, 49
  changes to, 51
  compared with Malcolm Baldrige Assessment, 50
  document control requirements of, 48
  elements of, 47–48
  software metrics requirements, 49
ISO. See International Organization for Standardization
ISQE. See International Software Quality Exchange
Iterative development process (IDP), 24–27
  IBM OS/2 development as example of, 26–27
  illustration of, 25
  overview of, 24–25
  steps in, 26
Iterative enhancement (IE). See Iterative development process
Iterative model, 24–27
J
Jelinski-Moranda (J-M) model, 212–213, 216–217
Joint application design (JAD), 457
Juran, J. M., 2, 5

K
Key process areas (KPAs), 40
KLOC (defects per thousand source lines of code), 66
Kolmogorov-Smirnov goodness-of-fit test, 222

L
Lack of Cohesion on Methods (LCOM), 338
Languages. See Programming languages
Leadership
  project teams and, 428
  TQM and, 8
Lean Enterprise Management, 10
Least-squares
  correlation, 78
  PTR arrival and backlog model and, 252
Library of generalized components (LGC), 30
Library of potentially reusable components (LPRC), 30
Life cycle, software, 191
Life of product (LOP), 88
Likert scale, 56–57
Lines of code (LOC)
  complexity metrics and, 312
  correlation with cyclomatic complexity (McCabe), 316–318
  defects relative to, 88–92
  defined, 88
  example defect rates, 91–92
  function points and, 93–94, 456
  measurement theory and, 56
  OO metrics and, 334
  tracking over time, 285
  variations in the use of, 88–89
Literature review
  defect removal effectiveness (DRE), 160–164
  project assessment, 426–427
Littlewood (LW) models, 213, 217
Littlewood nonhomogeneous Poisson process (LNHPP) model, 213
LLD. See Low-level design
LOC. See Lines of code
Logic defects, 121
Lorenz metrics and rules of thumb, 334–336
Loops, 316–319
LOP. See Life of product
Low-level design (LLD), 16, 258
LPRC. See Library of potentially reusable components
LW. See Littlewood models

M
M-O (Musa-Okumoto) logarithmic Poisson execution time model, 215
Machine testing, 100–101
Mailed questionnaire, 376, 394
Malcolm Baldrige assessment, 45–47
  assessment categories, 45
  award criteria, 46
  ISO 9000 and, 49–50
  purpose of, 46
  used by US and European companies, 46–47
  value of feedback from, 46
Malcolm Baldrige National Quality Award (MBNQA), 7, 45
Management, projects. See Project management
Management, quality. See Quality management models; Total quality management
Management technologies, 456–457
Managerial variables, CK metrics, 340–341
Manpower buildup index (MBI), 200
Margin of error, sample size, 379–380
Mark II function point, 95, 462
Market Driven Quality, IBM, 8
Market share, customer satisfaction and, 390–391
Mathematical operations
  interval scale, 60
  ratio scale, 61
Maturity. See also Capability Maturity Model (CMM); Capability Maturity Model Integration (CMMI); Process maturity
  project assessment and, 416
  software evaluation based on, 415
MBI. See Manpower buildup index
MBNQA. See Malcolm Baldrige National Quality Award
McCabe's complexity index. See Cyclomatic complexity
Mean time between failure (MTBF). See Mean time to failure (MTTF)
Mean time to failure (MTTF)
  cleanroom methodology and, 34
  defect rates and, 364
  as quality metric, 86–87
  as reliability metric, 291, 362
Mean time to repair (MTTR), 363, 366
Mean time to unplanned IPLs (MTI), 291–293
  example of, 292–293
  formula for, 291–292
  used by IBM Rochester as reliability measure, 291
Means, 323
Measurement. See also Metrics
  data quality control and, 470–472
  errors, 73–74
  future of, 484
  references, 485
  software quality and, 472–475
  SPC and, 481–483
  state of art in, 484
  TQM and, 8
Measurement theory, 55–83
  attenuation, 76–77
  causality, 80–82
  concepts and definitions and, 58
  correlation, 77–80
  errors, 73–74
  example application of, 56–58
  interval and ratio scales, 60–61
  nominal scale, 59
  operational definitions, 58–59
  ordinal scale, 59
  percentages, 63–65
  proportions, 62–63
  rates, 65–66
  ratios, 62
  references, 83
  reliability, 70–73, 75–76
  scientific method and, 55–56
  six sigma and, 66–70
  summary, 82–83
  validity, 71–72
Messages, OO technology, 332
Methods, OO technology, 332
Metrics. See also Measurement
  availability. See Availability metrics
  back end quality and, 57
  Capability Maturity Model (CMM), 41
  CK. See CK metrics suite
  complexity. See Complexity metrics
  correlation and, 76
  function points. See Function point metrics
  module design. See Module design metrics
  OO projects and, 348
  in-process quality. See In-process quality metrics
  product quality. See Product quality metrics
  productivity. See Productivity metrics
  software maintenance. See Software maintenance metrics
  software programs and, 472–475
  software quality. See Software quality metrics
  software testing. See In-process software testing metrics
  structural complexity. See Structural complexity metrics
  team commitment and, 297
Metrics programs
  Hewlett-Packard, 115–116
  IBM Rochester, 116–117
  Motorola, 110–115
Middle of development process, code integration pattern, 269
Models, 475–481. See also by individual type
  criteria for evaluating, 479
  deficiencies in application of, 477–478
  development process and, 480
  empirical validity and, 479
  list of, 475–476
  neural network computing technology and, 478–479
  probability assumptions and, 478
  professional's use of, 476–477
  steps in modeling process, 220–224
  types of, 476
Module design metrics, 322–328. See also Complexity metrics
  correlation between defect level and other metrics, 323–324
  defect levels, 324–325
  defect rate, 325–326
  identifying improvement actions, 326–327
  list of metrics used, 323
  means and standard deviations of variables, 324
  measuring defect level among program modules, 323–324
Modules. See Program modules
Motorola
  Quality Policy for Software Development (QPSD), 110–114
  Six Sigma Strategy, 7–8, 66–67
  Software Engineering Process Group (SEPG), 110–114
  software metrics programs, 110–115
MTBF (mean time between failure). See Mean time to failure
MTI. See Mean time to unplanned IPLs
MTTF. See Mean time to failure
MTTR. See Mean time to repair
Multi-inspector phase, 180
Multiple regression model, 326–327
Multivariate methods, 402
Musa-Okumoto (M-O) logarithmic Poisson execution time model, 215

N
NASA, Software Assurance Technology Center (SATC), 342
NASA Software Engineering Laboratory, 118
NEC Switching Systems Division, 103
Net satisfaction index (NSI), 99–100
Neural network computing technology, 478–479
NOC. See Number of Children of a Class
Nominal scale, 59
Nonhomogeneous Poisson Process (NHPP) model, 213–215
Nonlinear regression, 195
NSI. See Net satisfaction index
Null hypotheses, 483
Number of Children of a Class (NOC), 338

O
Object-oriented analysis (OOA), 331
Object-oriented design (OOD), 331
Object-oriented (OO) development, 27–32
  Branson and Hernes methodology for, 26–29
  design and code reuse and, 29–30
  Extreme Programming (XP) and, 31–32
  overview of, 27
  phases of, 27–29
  steps in, 28–29
  Unified Software Development Process and, 30–31
Object-oriented programming (OOP), 331
Object-oriented projects, 331–358
  CK metrics, 337–343
  education and skills required, 351–352
  examples of, 336–337
  Lorenz metrics and rules of thumb, 334–336
  metrics applicable to, 348
  OO concepts and constructs, 331–334
  OO metrics, 342–343
  performance and, 355
  productivity and, 343–347
  project management for, 353–354
  quality and development practices and, 355
  quality management, 347–351
  references, 357
  reusable classes and, 354–355
  summary, 356–357
  tools and development environment for, 352–353
Object Oriented Technology Council (OOTC), 336
Objects, OO technology, 332
ODC. See Orthogonal defect classification
OFE. See Opportunities for error
OO development. See Object-oriented development
OOA. See Object-oriented analysis
OOD. See Object-oriented design
OOP. See Object-oriented programming
OOTC. See Object Oriented Technology Council
Operational definitions, 58–59
Opportunities for error (OFE), 87, 93
Ordinal scale, 59
Organizational-level
  assessment, 416
  management commitment at, 474
Organizations, 416
Orthogonal defect classification (ODC), 266–269
  based on defect cause analysis, 266
  defect attributes, 268
  defect discovery periods, 267
  defect triggers, 267–268
  defect types and, 267
OS/2, 26
Outage duration. See Mean time to repair
Outages
  analyzing causes, 369–370
  data collection methods, 369, 371–372
  duration of, 360
  frequency of, 360
  incidents and downtime, 370
  indicators derived from outage data, 366
  tracking forms for, 367–368
Outcome indicators. See also Effort/outcome model
  commonly available, 301
  defect arrival patterns and, 299
  effort/outcome model and, 298
Outliers
  correlation and, 80
  PTR arrival and backlog model, 250–251
Overall satisfaction, 382–388

P
p charts, 145
Pareto diagrams, 133–136
  defects by component, 136
  defined, 128
  examples of, 134–135
  illustration of, 129
  quantitative analysis tools, 402
  return on investment and, 133
PCE. See Phase containment effectiveness
PDCE. See Phase defect containment effectiveness
PDF. See Probability density function
PDPC. See Process decision program chart
Pearson correlation coefficient
  assumption of linearity, 78
  complexity metrics and, 323
  Rayleigh model and, 194
Peer reviews
  process compliance and, 449
  project assessment, 423
  software development and, 414
Percent delinquent fixes, 108–109, 138–139
Percentages, 63–65
Performance, OO projects, 355
Personal Software Process (PSP), 42
Phase-based defect removal model (DRM). See Defect removal model
Phase containment effectiveness (PCE), 163–164, 169
Phase defect containment effectiveness (PDCE), 170
Phase effectiveness, 104–405
Plan-Do-Check-Act, 8–9
Platform availability. See Availability metrics
Poisson distribution, 145
Polynomial model, 250, 476
Precision. See Validity
Predictive validity, 203, 218
Preparation phase, project assessment, 421–422, 425–426
Preparation phase, quality assessment
  qualitative data, 400–402
  quantitative data, 399–400
Probability assumption, 478
Probability density function (PDF), 189–190, 208
Probability sampling methods, 377–379
  cluster sampling, 379
  simple random sampling, 377–378
  stratified sampling, 378–379
  systematic sampling, 377–378
Problem tracking reports (PTRs), 221, 279–282. See also PTR arrival and backlog model; PTR submodel
Problems per user month (PUM), 96–97
Process adoption, 448–449
Process assessment, 455–456
Process capability
  levels of, 440
  measuring, 440
Process compliance, 449–450
Process decision program chart (PDPC), 154
Process improvement, 453–467
  activity levels and, 462–466
  alignment principle, 443–444
  CMMI staged vs. continuous representations, 440–441
  day by day, 450–451
  defect rates and, 461
  economics of, 459–461
  maturity assessment models, 438–440
  measuring, 447–448
  process adoption, 448–449
  process capability levels, 440
  process compliance, 449–450
  process documentation, 446–447
  process maturity levels, 438
  productivity and, 461
  references, 452, 467
  schedules and, 461
  six stage program, 454–459
  summary, 451, 466–467
  teams, 441–443
  time to market goals and, 444–445
  TQM and, 8
Process maturity, 437–452. See also Capability Maturity Model (CMM)
  assessment models, 438–440
  compared with project assessment, 415–417
  framework and quality standards, 39
  levels of, 438
  measuring, 438–440
Processes, process improvement stages and, 457
Product level
  vs. module level, 311
  reliability, 188
Product quality
  reliability and defect rate as measure of, 364
  software quality and, 3–4
  vs. total quality management, 375
Product quality metrics, 86–100
  customer problems metric, 96–98
  customer satisfaction metric, 98–100
  customer's perspective, 92–93
  defect density metric, 87–88
  defect density vs. mean time to failure, 86–87
  function points and, 93–96
  intrinsic product quality and customer satisfaction, 86
  lines of code (LOC) metric, 88–92
Product size over time, 285
Productivity metrics, 343–347
  availability metrics and, 362
  examples, 343–345
  limitations of, 345
  person-days per class, 343–344
  process improvement and, 461
  two-dimensional vs. three- or four-dimensional measures, 346–347
  units of measurement for, 343
Products, good enough to ship, 304–307
  indicators or metrics for, 304–305
  negative metrics and, 305
  product type and, 304
  quality assessment summary, 306–307
Program length formula, 82
Program modules. See also Module design metrics
  complexity metrics and, 311
  module design examples, 322–328
Program size, 312
Program temporary fix (PTF) checklist, 131–133
Programming languages
  formal specification languages, 20
  fourth generation languages, 21
  UML (Unified Modeling Language), 30, 457
  variations in LOC, 89
Programming Productivity (Jones), 88–89
Project assessment. See Software project assessment
Project charters, 421–422
Project closeout plans, 422
Project level, vs. module level, 311
Project management
  improvement actions and, 427
  in-process quality assessment and, 399
  OO projects, 353–354
Proportions, 62–63
Prototyping
  OO development and, 354
  risk-driven approach and, 23
Prototyping model, 19–21
  evolutionary prototyping, 21
  factors in quick turnaround of prototypes, 20–21
  overview of, 19
  rapid throwaway prototyping, 21
  steps in, 20
Pseudo-control charts. See Control charts
PSP. See Personal Software Process
PTF (program temporary fix) checklist, 131–133
PTR arrival and backlog model, 249–253
  applications of, 249–250
  backlogs over time, 283–284
  component test (CT) phase and, 295
  contrasted with exponential model, 250
  defect arrivals over time, 279–282
  outliers and, 251–252
  predictability of arrivals with, 252–253
  predictor variables, 250
PTR submodel, 245–249
  deriving model curve, 247–248
  not applicable for projection, 248
  testing defect tracking, 246
  using in conjunction with other models, 249
  variables of, 246
PTRs. See Problem tracking reports
PUM. See Problems per user month

Q
QIP. See Quality improvement program
QPSD. See Quality Policy for Software Development
Qualitative data, 400–402, 403–405, 447
Quality. See also Software quality
  ambiguity regarding, 1
  of assumptions, 219
  audits, 397
  of conformance, 3
  customer satisfaction as validation of, 375
  of design, 3
  expense and, 2
  formula (conformance to customers' requirements), 4, 5
  improvement strategies, 238
  in-process assessment. See In-process quality assessment
  metrics and, 297–298
  OO projects and, 355
  popular view of, 1–2
  professional view of, 2–3
  projections with Rayleigh model, 237
  six sigma as measure of, 66–70
Quality assessment, 397–411
  assessment ratings over time, 408
  evaluation phase, 402–405
  overview of, 397–399
  parameters of, 406
  preparation phase, 399–402
  products, good enough to ship, 306–307
  recommendation phase, 408–410
  references, 411
  scale, 407
  summarization phase, 406–408
  summary, 410
Quality Improvement Paradigm/Experience Factory Organization, 9
Quality improvement program (QIP), 254–255
Quality management models, 235–270
  code integration pattern, 242–245
  compared with complexity metrics, 311
  evaluating, 257
  in-process metrics and reports, 258–266
  OO projects and, 347–351
  orthogonal defect classification, 266–269
  PTR arrival and backlog model, 249–253
  PTR submodel, 245–249
  Rayleigh model, 236–242
  references, 270
  reliability growth models, 254–257
  summary, 270
Quality metrics. See In-process quality metrics
Quality of measurement
  attenuation, 76–77
  errors, 73–74
  reliability, 70–73, 75–76
  validity, 71–72
Quality parameters, 5
Quality Policy for Software Development (QPSD), 110–114
Quality tools, 127–158
  cause-and-effect diagrams, 152–153
  checklists, 130–133
  control charts, 143–152
  histograms, 136–138
  new, 154
  overview of, 127–130
  Pareto analysis, 133–136
  references, 158
  relations diagrams, 154–156
  run charts, 138–140
  scatter diagrams, 140–143
  summary, 156–157
Quantitative analysis tools, 402
Quantitative data
  quality assessment evaluation phase, 402–403
  quality assessment preparation phase, 399–400
Questionnaires
  CMM maturity questionnaire, 424
  customer feedback and, 376–377
  project assessment and, 423–425, 487–508

R
R charts, 145
RAISE system tests, IBM Rochester, 286–288
Random errors, 73, 75
Random sampling, 377–378
Rank-order correlation, 78
Rapid throwaway prototyping, 21
Rates, 65–66
Ratio scale, 60–61
Ratios, 62
Rayleigh model, 189–206
  assumptions, 192–195
  development process and, 480–481
  illustration of, 192
  implementation of, 195–203
  in-process quality metrics and, 258–260
  references, 206
  reliability and validity and, 203–205
  software life cycle and, 191
  summary, 205
  underestimation and, 204
  Weibull distribution and, 189–190
Rayleigh model, quality management, 236–242
  defect prevention and early removal, 236
  error injection rate and, 239–240
  inspection process and, 240–242
  organization size and, 269
  quality improvement strategies and, 238
  quality projections, 237
  relationship between testing and field defect rates, 236–237
Real-time delinquency index, 109
Reboots. See Initial program loads
Recommendation phase, quality assessment
  recommendations, 408–409
  risk mitigation and, 409–410
Recommendations, software project assessment, 428–429
References
  availability metrics, 373–374
  complexity metrics, 330
  customer satisfaction, 393–394
  defect removal effectiveness, 184–185
  in-process quality assessment, 411
  in-process software testing, 309
  measurement, 485
  measurement theory, 83
  object-oriented projects, 357
  process improvement, 452, 467
  quality management, 270
  quality tools, 158
  Rayleigh model, 206
  reliability growth models, 231–233
  software development process models, 52–54
  software project assessment, 435
  software quality, 11–12
Relations diagrams, 154–156
Relative cost, 118, 183
Reliability, 70–76. See also Rayleigh model
  assessing, 75–76
  availability and, 362–364
  defined, 70–71
  measurement errors and, 73–74
  measurement quality and, 73
  operational definition of, 362
  random errors and, 75
  technologies for improving, 363–364
  validity and, 72, 203–205
Reliability growth models, 207–233
  assumptions, 216–218
  compression factor, 224–226
  defects over time, 226–229
  evaluating, 218–219
  exponential model and, 208–211
  Goel-Okumoto Imperfect Debugging model, 213
  Goel-Okumoto Nonhomogeneous Poisson Process (NHPP) model, 213–215
  Jelinski-Moranda (J-M) model, 212–213
  Littlewood models, 213
  modeling process, 220–224
  Musa-Okumoto logarithmic Poisson execution time model, 215
  overview of, 207, 211–212
  references, 231–233
  S models, 215–216
  summary, 229–231
Reliability growth models, quality management, 254–257
  advantages and applications of, 256
  defect arrival patterns, 254
  quality improvement program, 254–255
Reliability models. See also Rayleigh model
  compared with complexity metrics, 311
  deficiencies in application of, 477–478
  for small organizations, 230–231
  static and dynamic, 187–188
Removal efficiency, 96, 160–161, 402, 443, 465
Remus-Zilles, 174, 177–178
Reports, assessment, 429–433
Reports, in-process
  defect rate by phase, 265–266
  inspection, 258–263
  inspection scoring questionnaire and, 261
  severity distribution of test defects, 266
  test defect origin, 266
  unit test defects, 264–265
  unit test defects by phase, 265–266
Response for a Class (RFC), 338
Response time, software maintenance, 107–108
Results, software project assessment, 428–429
Return on investment (ROI), 133
Reuse
  analyzing with scatter diagrams, 141–143
  design and code, 29–30
  list of reusable artifacts, 458
  OO projects and, 354–355
  reusable software parts, 20
RFC. See Response for a Class
Rigorous implementation, 56
Risk analysis, spiral development model and, 23
Risk exposure rates, 65
Risk mitigation, 409–410
Rules of thumb (Lorenz), 334–336
Run charts, 138–140
  defined, 130
  illustration of, 129
  percent of delinquent fixes, 138–139
  tracking cumulative parameters, 140
  uses of, 138

S
S curve, 140, 272–279, 299
S models, 145, 215–216
Sample size, customer surveys, 379–381
  absolute size vs. relative size, 380–381
  margin of error and, 379–380
Sampling methods, 377–379
  cluster sampling, 379
  simple random sampling, 377–378
  stratified sampling, 378–379
  systematic sampling, 377–378
SATC. See Software Assurance Technology Center
Satisfaction scale, 381, 383. See also Customer satisfaction
Scales of measurement
  hierarchical nature of, 61
  interval, 60–61
  nominal, 59
  ordinal, 59
SCAMPI. See Standard CMMI Appraisal Method for Process Improvement
Scatter diagrams, 140–143
  analyzing nonlinear relationships, 78
  defined, 129–130
  illustration of, 129
  McCabe's complexity index and, 140–141, 324–325
  as quantitative analysis tools, 402
  relationship of reuse to defects, 141–143
  uses of, 140
SCE. See Software Capability Evaluation
Schedule performance index (SPI), 152
Scheduled uptime, 360
Schedules, process improvement and, 461
Scientific method, 55–56
Scope creep, 285
Scope of project assessment, 413, 415–416
SEI. See Software Engineering Institute
SELECT statements, 319
SEPG. See Software Engineering Process Group
Server availability, 360–361
Service calls, outage data and, 369
SETT. See Software Error Tracking Tool
Seven basic quality tools. See Quality tools
Severity distribution of test defects report, 266
Shipped source instructions (SSI), 91
Showstopper parameter (critical problems), 293
Significance tests, 483
Simple random sampling, 377–378
Simplicity, reliability models and, 219
Single-inspector phase, 180
Six sigma, 66–70
  centered vs. shifted, 69
  implications for process improvement and variation reduction, 69–70
  as industry standard, 66
  Motorola, 7–8
  standard deviation and, 66–67
Six stage program, process improvement
  overview, 454
  Stage 0: Software Process Assessment and Baseline, 455–456
  Stage 1: Focus on Management Technologies, 456–457
  Stage 2: Focus on Software Processes and the Methodologies, 457
  Stage 3: Focus on New Tools and Approaches, 457
  Stage 4: Focus on Infrastructure and Specialization, 457–458
  Stage 5: Focus on Reusability, 458
  Stage 6: Focus on Industry Leadership, 458–459
Skills
  metrics expertise, 474–475
  OO projects, 351–352
  skill-building incentives, 446–447
SLIM. See Software Life-cycle Model tool
Small organizations
  customer satisfaction and, 392–393
  defect removal effectiveness and, 182–183
  object-oriented metrics and, 356
  quality management for, 269
  reliability modeling for, 230–231
  software testing recommendations, 308
Small teams
  complexity metrics and, 327–328
  OO projects and, 356
Software
  complexity. See Complexity metrics
  control charts and, 149
  development models. See Software development process models
  getting started with metrics program for, 472–475
  life cycle, 191
  maintenance metrics. See Software maintenance metrics
  measurement. See Measurement
  process improvement. See Process maturity
  productivity. See Productivity metrics
  project assessment. See Software project assessment
  quality management. See Quality management models
  quality metrics. See Software quality metrics
  quality models. See Models
  reliability. See Reliability
  stress tests, 286–287
  testing metrics. See In-process software testing metrics
Software Assessments, Benchmarks, and Best Practices (Jones), 96
Software Assurance Technology Center (SATC), 342
Software Capability Evaluation (SCE), 438
Software development process models, 13–54
  cleanroom methodology, 32–35
  defect prevention process, 35–39
  ISO 9000, 47–51
  iterative model, 24–27
  Malcolm Baldrige assessment, 45–47
  object-oriented approach, 27–32
  process maturity framework and quality standards, 39
  prototyping approach, 19–21
  references, 52–54
  SEI maturity model, 39–44
  spiral model, 21–24
  SPR assessment method, 44–45
  summary, 51–52
  waterfall model, 14–19
Software engineering
  CMMI model, 440
  software development and, 470
  software quality and, 475–481
  techniques and tools, 484
Software engineering data, 117–123
  challenge of collecting, 117
  collection methodology, 118
  defect data and, 119–123
  expense of collecting, 118
Software Engineering Institute (SEI)
  assessment vs. capability evaluations, 415
  CBA IPI, 417–418
  CMM, 9, 39–44
  DPP and, 39
  lack of solid data on process improvements, 453
  rate of software flaws, 365
  SCAMPI assessors, 439
Software Engineering Metrics and Models (Conte), 88
Software Engineering Process Group (SEPG)
  alignment principle, 443–444
  capability baseline, 450
  measuring process improvement, 447–448
  monitoring process adoption, 448–449
  Motorola, 114–115
  not focusing only on maturity level, 442–443
  as process improvement team, 441
  reliable goals and, 445
Software Error Estimation Reporter (STEER), 201–203
Software Error Tracking Tool (SETT), 222
Software Life-cycle Model (SLIM) tool, 200
Software maintenance metrics, 105–110
  fix backlog and backlog management index, 106–107
  fix quality, 109–110
  fix response time and responsiveness, 107–108
  overview of, 105
  percent delinquent fixes, 108–109
Software Productivity Research, Inc. (SPR), 44–45, 454. See also SPR assessment method
Software project assessment, 413–436
  activities by phases of, 433–434
  assessment report, 429–433
  audits vs. assessments, 414–415
  characteristics of approach to, 420–421
  CMM-based method, 417–418
  facts gathering phase 1, 422–423
  facts gathering phase 2, 425–426
  improvements for, 426–428
  preparation phase, 421–422
  project assessment vs. process maturity assessment, 415–417
  questionnaire for, 423–425
  references, 435
  results and recommendations, 428–429
  scope of, 413
  SPR method, 419–420
  summary, 434–435
  summary report, 433
Software quality, 1–12
  customer's role in, 3–4
  overview of, 4–7
  popular view of, 1–2
  professional view of, 2–3
  references, 11–12
  summary, 10
  TQM and, 7–10
Software Quality and Productivity Analysis (SQPA), 7
Software Quality Assurance (SQA) group, 449
Software quality engineering, 475–481
Software quality metrics
  data collection, 117–123
  Hewlett-Packard program, 115–116
  IBM Rochester program, 116–117
  Motorola program, 110–114
  overview of, 85–86
  in-process quality. See In-process quality metrics
  product quality. See Product quality metrics
  references, 125–126
  software maintenance. See Software maintenance metrics
  summary, 123–125
Space Shuttle software project, 161, 194–195, 238
SPC. See Statistical process control
Spearman's rank-order correlation, 78, 194
SPI. See Schedule performance index
Spiral development model, 21–24
  advantages/disadvantages of, 23–24
  illustration of, 22
  risk analysis and risk-driven basis of, 23
  TRW Software Productivity System, 21
Split-halves method, 75
SPR assessment method, 44–45
  assessment themes, 45
  five-point scale of, 44
  process maturity and, 416
  questionnaire topics, 44–45
  steps in, 419–420
SPR. See Software Productivity Research, Inc.
Spurious relationships, causality, 81
SQA. See Software Quality Assurance group
SQC. See Statistical quality control
SQPA. See Software Quality and Productivity Analysis
SSI. See Shipped source instructions
Staged representation (software engineering), CMMI, 42, 44, 440–441
Standard CMMI Appraisal Method for Process Improvement (SCAMPI), 415, 418, 438–439
Standard deviation
  module design metrics and, 324
  sample size and, 380–381
  six sigma measures and, 66–70
Standards, ISO 9000, 47–51
Static measures vs. dynamic, 65
Static reliability models, 187–188
Statistical methods
  null hypotheses and alternative hypotheses and, 483
  software development and, 482
Statistical process control (SPC), 143–144, 481–483
Statistical quality control (SQC), 481–483
STEER. See Software Error Estimation Reporter
Stratified sampling, 378–379
Stress testing, 286, 301
Structural complexity metrics, 319–322
  defined (Henry and Kafura), 319–320
  fan-in/fan-out metrics (Yourdon and Constantine), 319–320
  interaction between modules and, 320
  system complexity model (Card and Glass), 320–322
Subtraction, interval scale and, 60
Success of projects, factors in, 426–427
Summarization phase, quality assessment
  overall assessment, 406–408
  strategy for, 406
Summary report, project assessment, 433
Supplier quality, 390
Surveys, customer satisfaction, 98, 371, 376–381
Survival studies, exponential model and, 208
SVT. See System verification test
Sydney Olympics (2000), acceptance testing and, 302
Syntactic constructs, complexity metrics, 318–319
System availability. See Availability metrics
System complexity model (Card and Glass), 320–322
System crashes and hangs
  effort/outcome model and, 301
  software testing and, 289–291
System tests
  CPU utilization and, 286–289
  quality improvement program and, 255
System verification test (SVT), 422–423
Systematic errors, 73, 75
Systematic sampling, 377–378
Systems engineering, CMMI model, 440

T
TDCE. See Total defect containment effectiveness
Team commitment, project success and, 297
Team Software Process (TSP), 42
Teams
  complexity metrics and, 327–328
  process improvement and, 441–443
  project assessment and, 427–429
  quality management models for, 474
Telephone interviews, 376
Test case, metrics, 303–304
Test defect origin report, 266
Test execution, metrics, 303–304
Test plan curve, 278–279
Test progress S curve
  assigning scores to test cases and, 275
  coverage weighting, 276
  execution plan and, 278
  illustration of, 275
  plan curve and, 278–279
  purpose of, 272
  tracking, 276–278
Test/retest method, 75, 76
Tests
  assigning scores to test cases, 275
  component, 17
  defect removal and, 160
  defect tracking and, 246, 269
  determining end date for, 256
  effort/outcome model and, 298
  functional verification test (FVT), 422–423
  Kolmogorov-Smirnov goodness-of-fit test, 222
  phases, 295
  for significance, 483
  system verification test (SVT), 422–423
  tested defects vs. field defects, 236–237
  unit tests (UT), 16–17, 243, 264–266
Time between failures models
  assumptions, 217–218
  Jelinski-Moranda (J-M) model, 212–213
  reliability growth models and, 212
  reliability models and, 197
Time boxing, 21
Time to market goals, 444–445
Tools. See also Quality tools
  OO projects, 352–353
  process improvement stages and, 457
Total defect containment effectiveness (TDCE), 163
Total Quality Control (TQC), 7
Total quality management (TQM)
  background of, 7
  customer satisfaction and, 375
  Hewlett-Packard's program, 7
  IBM's program, 8
  key elements of, 8, 392
  Motorola's program, 7–8
  organizational frameworks for, 8–10
TQC. See Total Quality Control
TQM. See Total quality management
Tracking defects. See also Problem tracking reports (PTRs)
  IBM Rochester software testing, 279
  OO quality and, 347–348
  PTR submodel and, 246
  testing and, 269
Tracking outage data, 367–368
Training materials, 447
Trend charts, 402
Trillium model, 416
TRW Software Productivity System (TRW-SPS), 21–22
TSP. See Team Software Process

U
u charts, 145–146
Unified Modeling Language (UML), 30, 457
Unified Software Development Process, 30–31
Unit tests (UT)
  code development and, 243
  coverage and defect report, 264–265
  defects by test phase report, 265–266
  waterfall model, 16–17
Unplanned IPLs. See Initial program loads
UPRIMD (usability, performance, reliability, installability, maintainability, documentation, and availability) categories, 383–388
Use-case model, UML, 30
UT. See Unit tests

V
Validation of data, 118
Validity, 70–73
  defined, 71
  measurement quality and, 73
  predictive vs. empirical, 203–204
  reliability and, 72
  systematic errors and, 75
  types of, 71–72
Variation, index of, 71
Vendors. See also Acceptance testing (vendor-developed software)
Video conferencing, 428
Vienna Development Method (VDM), 34

W
Waterfall development model, 14–19
  advantages of, 14
  coding stage, 16
  component tests, 17
  early customer programs (ECP), 18–19
  Entry-Task-Validation-Exit (ETVX), 14
  example implementation, 14–15
  high-level design, 16
  low-level design, 16
  system-level test, 17–18
  unit tests, 16–17
Weibull models
  exponential model, 208–211
  Rayleigh model, 189–190
Weighted Methods per Class (WMC), 337–338
Where found, 169–171, 182–183
Workload characteristics, 286–287

X
X-bar charts, 145
XP (Extreme Programming), 31–32

Z
Z notation, 34
Zero point, ratio scale and, 61