

46 Computerized Systems Validation

Saeed Tafreshi, Intelitec Corporation, Irvine, California, U.S.A.

The concept of validation was developed in the 1970s and is widely credited to Ted Byers, who was then Associate Director of Compliance at the U.S. FDA. The concept was focused on:

Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.

This concept continues to be followed, with some modifications, by the various authorities regulating GMP around the world. This definition also has been adopted for the validation of business, manufacturing and laboratory computer systems. The need to validate computer systems formally began in 1979, when the U.S.A. introduced GMP regulatory legislation which specifically referred to automation equipment. GMP is enforced by national regulatory authorities, who can prevent the sale of a product in their respective country if they consider its manufacture not to be GMP compliant. Validation for GMP is a license-to-operate issue.

Over the last three decades, the manufacturing industry has increasingly used computer systems to control manufacturing processes for improved performance and product quality. This policy is often embedded in corporate strategy. Computer systems, however, by the nature of their complexity are susceptible to development and operational deficiencies which can adversely affect their control capability and hence product safety, quality and efficacy. Common examples of such deficiencies include poor specification capture, design errors, poor testing and poor maintenance practice.

The potentially devastating outcome of GMP noncompliance of computer systems was demonstrated in 1988, when deficient software in a data management system controlling a blood bank could have led to the issue of AIDS-infected blood. Additionally, computer systems can endanger public health through the manufacture and release of drug products with deficient quality attributes.

The first widely publicized FDA citation for computer validation noncompliance occurred in 1985; however, as early as 1982, the FDA was publicly stating that it was “nervous” if computer systems were used without being validated. In 1983, the FDA issued the Guide to Inspection of Computerized Systems in Drug Processing, Technical Report, Reference Materials and Training Aids for Investigators, which became known as the “Blue Book.” This publication guided inspectors on what to accept as validation evidence for computer systems. The Blue Book formally introduced the expectation of a life-cycle approach to validation. The aim was to build in quality (QA) rather than rely on testing in quality (quality control).

Responding to the FDA’s proactive position on computer systems validation, the PMA formed a Computer Systems Validation Committee to represent and coordinate the industry’s viewpoint. The results were a joint FDA/PMA conference in 1984 discussing computer systems validation and, in the following year, the publication of an industry perspective. The publication presented an approach to validation for both new and existing computer systems. GMP legislation is unusual in that it is equally applied to new production facilities and to production facilities built entirely or partially before the legislation (including amendments) was enforced.

Throughout the 1980s, computer systems validation was debated primarily in the U.S.A. Ken Chapman published a paper covering this period, during which the FDA gave advice on the following GMP issues:
- Input/output checking
- Batch records
- Applying GMP to hardware and software
- Supplier responsibility
- Application software inspection
- FDA investigation of computer systems
- Software development activities

In addition, since the end of the 1980s, the FDA and the pharmaceutical industry have debated the GMP requirements and the practicalities of electronic signatures. A resolution was achieved which became the FDA’s proposed regulation.

Complementing the U.S. GMP guidance, the European Commission and authorities in Australia issued GMP codes of practice in 1989 and 1990, respectively. The European code, known as the “Orange Guide,” was later issued in 1991 as a Directive superseding member state GMP legislation and included an annex covering computerized systems.

Abbreviations used in this chapter: APV, International Association for Pharmaceutical Technology; cGMP, current good manufacturing practice; CRT, cathode ray tube; DQ, design qualification; EU, European Union; FAT, factory acceptance test; FDA, Food and Drug Administration; GAMP, good automated manufacturing practice; GMP, good manufacturing practice; IQ, installation qualification; ISPE, International Society of Pharmaceutical Engineering; MCA, Medicines Control Agency; OQ, operational qualification; PDA, Parenteral Drug Association; PLC, programmable logic controller; PMA, Pharmaceutical Manufacturers Association; PQ, performance qualification; QA, quality assurance; SCADA, supervisory control and data acquisition; SQ, system qualification or specification qualification; URS, user requirements specification.

In most countries, GMP has been interpretive, and to prosecute a pharmaceutical manufacturer a court must be convinced that the charges reflect the intent to flout governing legislation. In the U.S.A., however, a court declaratory judgment determined supplementary GMP information to be substantive. The net effect was that the FDA’s advisory opinions became binding on the Agency. In August of 1990, the FDA announced that it no longer considered advisory opinions binding, on the grounds that Counsel considered such restrictions unconstitutional. Hence, the FDA interpretation of the regulations in Compliance Policy Guides, the Guide to Investigators, and other publications by FDA authors became nonbinding.

Computer systems validation also became a high-profile industry issue in Europe in 1991, when several European manufacturers and products were temporarily banned from the U.S.A. for computer systems noncompliance. The computer systems in question included autoclave PLCs and SCADA systems. The position of the FDA was clear; the manufacturers had failed to satisfy its “concerns” that computer systems should:
- Perform accurately and reliably
- Be secure from unauthorized or inadvertent changes
- Provide for adequate documentation of the process

The manufacturers thought they had satisfied the requirements of the existing GMP legislation, but they had not satisfied the FDA’s expectations of GMP. Hence began the adoption of cGMP to signify the latest understanding of the validation practices and standards expected by the regulatory authorities.

In 1991, the U.K. Pharmaceutical Industry Computer Systems Validation Forum (known as the U.K. FORUM) was established to facilitate the exchange of validation knowledge and the development of a standard industry guide for computer systems validation. At this time suppliers were, on the whole, struggling to understand and implement the various interpretations and requirements of GMP presented by the manufacturers. ISO 9000 and TickIT accreditation for quality management provided a good basis for validation but did not fully satisfy GMP requirements. Then the U.K. FORUM’s guide came to fruition and was launched as a first draft within the U.K. The guide is often referred to as the GAMP guide.

Meanwhile, two experienced GMP regulatory inspectors, Ron Tetzlaff and Tony Trill, published papers presenting, respectively, the FDA’s and the U.K. MCA’s inspection practices for computer systems. These papers presented a comprehensive perspective on the current validation expectations of GMP regulatory authorities. Topics covered included:
- Life-cycle approach
- Quality management
- Procedures
- Training
- Validation protocols
- Qualification evidence
- Change control
- Audit trail
- Ongoing evaluation

The pharmaceutical industry, in search of a common approach to computer systems validation, began incorporating these topics. Nevertheless, the FDA and MCA continue to encounter instances of noncompliant practice based on:
- Incomplete documents
- Insufficient detail in documents
- Missing documentary evidence

There was a clear need for guidance and standards on computer systems validation, and early in 1995 there were four milestones of significance to practitioners:
- The U.S.A. proposed new GMP legislation affecting electronic records and electronic signatures.
- After 16 years, the U.S.A. amended its legislation affecting computer validation, making a minor concession concerning the degree of input/output validation required for reliable computer systems.
- The U.S. PDA presented a manufacturer’s guide to complement the PMA life cycle.
- The U.K. FORUM issued a revised draft of their supplier guide for European comment.

These initiatives helped the manufacturers and suppliers meet the challenge to validate computer systems effectively and efficiently. The initiatives which further clarified the requirements of validation included:
- The U.K. FORUM’s investigation into the benefits of supplier audits, shared by a number of participant manufacturers.
- The German APV (Information Technology Section) guide to Annex 11 of the European Union GMP Directive regarding computerized systems.
- The German GMA Committee 5.8 and NAMUR Committee 1.9 joint working group’s recommendations for computer systems validation.
- The coordination of the German initiatives with the U.K. FORUM supplier guide, and possibly the U.S. PDA manufacturer’s guide, as announced at the ISPE computer validation seminar in Amsterdam in March of 1995.

What is clear to date is the mutual benefit of regulators, manufacturers and suppliers working together towards a common GMP goal. GMP, while facilitating improvements to manufacturing performance, also is integral to the continuing high standing of the pharmaceutical industry.

In order for the industry to follow a common path in complying with the cGMP guidelines related to computer control systems, there is a need to understand the basics of proper system development and to consider the overall cost in building a true business case. In doing so, it is necessary to follow, in sequence, the stages for the validation of a computerized control system to FDA requirements, and to understand their relationship to the development and implementation stages of an automation project.

The Quality System regulation requires that “when computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol.” This has been a regulatory requirement for GMP since 1978.

In addition to the above validation requirement, computer systems that implement part of a regulated manufacturer’s production processes or quality system (or that are used to create and maintain records required by any other FDA regulation) are subject to the Electronic Records, Electronic Signatures regulation. This regulation establishes additional security, data integrity, and validation requirements when records are created or maintained electronically. These additional Part 11 requirements should be carefully considered and included in system requirements and software requirements for any automated record-keeping systems. System validation and software validation should demonstrate that all Part 11 requirements have been met.

Computers and automated equipment are used extensively throughout the Pharmaceutical, Biotech, Medical Device, and Medical Gas industries in areas such as design, laboratory testing and analysis, product inspection and acceptance, production and process control, environmental controls, packaging, labeling, traceability, document control, complaint management, and many other aspects of the quality system. Increasingly, automated plant floor operations have involved extensive use of embedded systems in:
- PLCs
- digital function controllers
- statistical process control
- supervisory control and data acquisition
- robotics
- human–machine interfaces
- input/output devices
- computer operating systems

Computerized operations are now common in FDA-regulated industries. Small “minicomputer” systems are being used, sometimes in conjunction with larger computers, to control batching operations, maintain formula files and inventories, monitor process equipment, check equipment calibration, etc. The medical device industry is presently utilizing automatic test sets controlled by computers. In this application the computer is relied upon to make the decision as to whether a particular test parameter is within a specific tolerance. The operator does not see the values of the parameters measured, but merely receives a green or red light indicating a go/no-go situation. Products are accepted or rejected on this basis. In order to evaluate and/or report the adequacy of any computer-controlled processes or tests, the basics of computer construction and operation must be understood. The entire computer control system has been simplified as follows.

A computer is a machine and, like all other machines, is normally used because it performs specific tasks with greater accuracy and more efficiency than people. Computers accomplish this by having the capacity to receive, retain, and give up large volumes of data and to process the data in a very short time. An understanding of computer operation, and the ability to use a computer, does not require a detailed knowledge of either electronics or the physical hardware construction. An overall view of the computer organization, with emphasis on function, is sufficient.

There are basically two types of computers, analog and digital. The analog computer does not compute directly with numbers. It accepts electrical signals of varying magnitude (analog signals) which in practical use are analogous to, or represent, some continuous physical magnitude such as pressure, temperature, etc.

Analog computers are sometimes used for scientific, engineering and process-control purposes. In the majority of industry applications used today, analog values are converted to digital form by an analog-to-digital converter and processed by digital computers.

The digital computer is the general-use computer used for manipulating symbolic information. In most applications the symbols manipulated are numbers, and the operations performed on the symbols are the standard arithmetical operations. Complex problem solving is achieved by the basic operations of addition, subtraction, multiplication and division.

A digital computer is designed to accept and store instructions (a program), accept information (data), process the data as specified in the program, and display the results of the processing in a selected manner. Instructions and data are in a coded form that the computer is designed to accept. The computer performs automatically and in sequence according to the program.

The computer is a collection of interconnected electromechanical devices (hardware) directed by a central control unit. The central control unit is the controlling device that supervises the sequence of activities that take place in all parts of the computer. Classically, the hardware consists of the mainframe (computer) for computation, storage and control, and peripheral devices (input–output devices) for entering raw data and printing or displaying the output. Input data may be entered into the computer by teletypewriters, magnetic tape, punched tape, card readers, etc. Output may be displayed in the form of a hardcopy printout, magnetic tape, CRT, etc. The two units of input and output are often joined and referred to as input/output or simply I/O. A computer terminal with a CRT display is an example of a combined input/output device.

Equally important as hardware in the effective use of the digital computer is the software. The numerous written programs and/or routines that dictate the process sequence the computer will follow are called software. A computer can be programmed to solve almost any problem that can be “defined.” Defined means that the solution of a problem must be reduced to a series of steps that can be written as a series of computer instructions. In other words, the individual steps of the problem must be set up, including the desired level of accuracy, prior to the computer processing and solving the problem. The computer must be directed or commanded by a precisely stated set of commands, or program. Until a program is prepared and stored in the computer memory, the computer knows absolutely nothing, not even how to receive input data. The accuracy and validation of the program is one of the most important aspects of computer control.

Physical quantities are especially adaptable to binary digital techniques because most physical quantities can be expressed as two states: switches are on or off, a quantity level is above or below a set value, holes in cards are punched or not punched, electrical voltage or current is positive or negative or above or below a preset value. For such applications as process control, the digital computer makes decisions by comparing input data to a predetermined value. The computer takes a course of action depending on whether the input data is greater than, equal to, or less than the predetermined value.


The predetermined value and the course of action the computer follows are in the form of a program stored in the computer memory. So, actually, the computer does not make decisions, but merely follows written program instructions. A printout or display of the actual values measured may be included as a part of the program. Verification of proper computer operation may be accomplished in this example by applying known inputs greater than, equal to, and less than the predetermined value and subsequently reviewing the results.
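To make the go/no-go logic concrete, here is a minimal sketch in Python of the decision and verification pattern just described; the setpoint, tolerance, and test values are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of the go/no-go decision logic described above, with a
# verification harness that applies known inputs below, at, and above the
# predetermined value. All names and numbers here are illustrative.

SETPOINT = 100.0   # predetermined value stored in the "program"
TOLERANCE = 0.5    # acceptance band around the setpoint

def evaluate(measured: float) -> str:
    """Compare a measured input to the predetermined value and return
    'go' (green light) if it is within tolerance, else 'no-go' (red light)."""
    if abs(measured - SETPOINT) <= TOLERANCE:
        return "go"
    return "no-go"

# Verification: apply known inputs greater than, equal to, and less than
# the predetermined value, and review the results against expectations.
known_inputs = {99.0: "no-go", 100.0: "go", 101.0: "no-go"}
for value, expected in known_inputs.items():
    result = evaluate(value)
    assert result == expected, f"input {value}: got {result}, expected {expected}"
    print(f"input={value:6.1f}  result={result}")
```

The assertions play the role of the documented review of results: each known input is checked against its expected outcome.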

When validating a computer control system, particular attention must be paid to the following of established procedures and to the documentation required during each stage, to ensure that proper and sufficient documented evidence is provided to support validation inspection by the FDA.

The FDA has issued two validation definitions which state the following:
1. “Establishing documented evidence that a system does what it is designed to do.”
2. “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.”

The FDA audits against compliance with cGMP requirements. Rigid procedures are required to be followed, and those procedures must generate sufficient documentation to ensure that traceability and accountability of information (an audit trail) is maintained.

The FDA does not provide certification for a company and its procedures, nor does it approve what documentation should be produced. The company is responsible for demonstrating that procedures are followed and associated documentation generated to support the manufacture of the company’s products.

The FDA’s position was made clear in the following statement by Ronald Tetzlaff (when he was employed by the FDA) in Pharmaceutical Technology, April 1992: “Unless firms have documented evidence to ensure the proper performance of a vendor’s software, the FDA cannot consider the automated system to be validated.”

Therefore it is important that companies have approved Quality Systems in place that ensure that procedures are followed and an audit trail is maintained.

COMPUTERIZED SYSTEM VALIDATION QUALITY SYSTEM

The validation of a computerized control system to FDA requirements can be broken down into a number of phases which are interlinked with the overall project program. A typical validation program for a control system also includes the parallel design and development of control and monitoring instrumentation. A typical Quality System includes the following phases.

Definition Phase
Validation starts at the definition (conceptual design) phase because the FDA expects to see documentary evidence that the chosen system vendor and the software proposed meet the customer’s predefined selection criteria.

Vendor acceptance criteria, which must be defined by the customer, should typically include the following.

The Vendor’s Business Practices
- Vendor certification to an approved QA standard. Certification may be a consideration when selecting a systems vendor. Initiatives which promote the use of international standards to improve the quality management of software development should be considered.
- Vendor audit by the customer to ensure company standards and practices are known and are being followed.
- Vendor end-user support agreements.
- Vendor financial stability.
- Biographies for the vendor’s proposed project personnel (interviews also should be considered).
- Checking customer references and visiting their sites should be considered.

The Vendor’s Software Practices
- Software development methodology
- Vendor’s experience in using the project software, including: operating system software; application software; “off-the-shelf” and support software packages (e.g., archiving, networking, batch software)
- Software performance and development history
- Software updates
- The vendor must make provision for source code to be accessible to the end user (e.g., have an escrow or similar agreement) and should provide a statement to this effect. Escrow is the name given to a legally binding agreement between a supplier and a customer which permits the customer access to source code, which is stored by a third-party organization. The agreement also permits the customer access to the source code should the supplier become bankrupt.

Vendor acceptance can be divided into these areas:
- Vendor prequalification (to select suitable vendors to receive the Tender enquiry package)
- Review of the returned Tenders
- Audit of the most suitable vendor(s)

Other documentation produced during the definition phase includes the URS, standard specifications and Tender support documentation.

The Tender enquiry package must be reviewed by the customer prior to issue to selected vendors. This review, called SQ, is carried out to ensure that the customer’s technical and quality requirements are fully addressed.

System Development Phase
The system development phase is the period from Tender award to delivery of the control system to site. It can be subdivided into four subphases:
- Design agreement
- Design and development
- Development testing
- Predelivery or FAT

The design agreement phase comprises the development and approval of the system vendor’s Functional Design Specification, its associated FAT Specification, and the Quality Plan for the project. These form the basis of the contractual agreement between the system vendor and the customer.

The design and development phase involves the development and approval of the detailed system (hardware and software) design and testing specifications. The software specifications comprise the Software Design Specification and its associated Software Module Coding. The hardware specifications comprise the Computer Hardware Design Specification and its associated Hardware Test Specification and Computer Hardware Production.

The development testing phase comprises the structured testing of the hardware and software against the detailed design specifications, starting from the lowest level and working up to a fully integrated system. The systems vendor must follow a rigorous and fully documented testing regime to ensure that each item of hardware and each software module developed or modified performs the function(s) required without degrading other modules or the system as a whole.
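The following hedged sketch illustrates the bottom-up testing idea described above, moving from module-level tests to an integrated test; the module names, conversion factor, and alarm limit are hypothetical, not taken from the chapter.

```python
# Hypothetical sketch of bottom-up development testing: each module is
# tested against its design specification first, then the integrated
# chain is tested as a whole. Names and limits are illustrative.

def read_sensor(raw_counts: int) -> float:
    """Module 1: convert raw A/D counts (0-4095) to degrees Celsius."""
    return raw_counts * (150.0 / 4095.0)

def check_alarm(temperature_c: float, limit_c: float = 121.0) -> bool:
    """Module 2: flag an alarm when temperature exceeds the limit."""
    return temperature_c > limit_c

# Lowest-level (module) tests against the design specification
assert abs(read_sensor(0) - 0.0) < 1e-6
assert abs(read_sensor(4095) - 150.0) < 1e-6
assert check_alarm(125.0) is True
assert check_alarm(100.0) is False

# Integrated test: a raw input exercised through the full chain
assert check_alarm(read_sensor(4000)) is True   # ~146 C, above limit
assert check_alarm(read_sensor(2000)) is False  # ~73 C, below limit
print("module and integration tests passed")
```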

The predelivery acceptance phase comprises the FAT, which is witnessed by the customer, and the DQ review by the customer to ensure the system design meets technical (system functionality and operability) and quality (auditable, structured documentation) objectives.

Throughout the system development phase, the systems vendor should be subject to a number of quality audits by the customer, or their nominated agents, to ensure that the Quality Plan for the project is being complied with and that all documentation is being completed correctly. In addition, the vendor should conduct internal audits, and the reports should be available for inspection by the customer. The systems vendor also must enforce a strict change control procedure to enable all modifications and changes to the system to be thoroughly designed, tested, and documented. Change control is a formal system by which qualified representatives of appropriate disciplines review proposed or actual changes that might affect a validated status. The intent is to determine the need for action that would ensure and document that the component or system is maintained in a validated state.

The audit trail documentation introduced and maintained by the Quality Plan, together with the test documentation, can be used by the customer during FDA inspections as evidence that the system meets the functionality required. In particular, the test and change control documentation will demonstrate a positive, thorough, and professional approach to validation.

Commissioning and In-Place Qualification Phase
The commissioning and qualification phase encompasses the System Commissioning on site, Site Acceptance Testing, IQ, OQ, and, where applicable, PQ activities for the project. The most important part of this phase is qualification based on the system specification documentation. The system installation and operation must be confirmed against its documents. All system adjustments and changes occurring in this phase must result in updating of the corresponding specification document. Building a reliable system base document in support of a life-cycle approach provides assurance during this phase, when most last-minute changes are discovered. No benefit of any life-cycle approach can be obtained when the system and its documentation do not match after completion of this phase.

Ongoing Maintenance Phase
The term maintenance does not mean the same thing when applied to hardware and software. The operational maintenance of hardware and of software are different because their failure/error mechanisms are different. Hardware maintenance typically includes preventive hardware maintenance actions, component replacement, and corrective changes. Software maintenance includes corrective, perfective, and adaptive maintenance but does not include preventive maintenance actions or software component replacement.

Changes made to correct errors and faults in the software are corrective maintenance. Changes made to the software to improve the performance, maintainability, or other attributes of the software system are perfective maintenance. Software changes to make the software system usable in a changed environment are adaptive maintenance.

When changes are made to a software system, sufficient regression analysis and testing should be conducted to demonstrate that portions of the software not involved in the change were not adversely impacted. This is in addition to testing that evaluates the correctness of the implemented change(s).
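As an illustration of that point, the sketch below pairs a test of the implemented change with a regression test of an unchanged function; the function names and values are hypothetical, and the tests are written in pytest style.

```python
# Illustrative regression suite: after a change to one function, tests for
# unchanged functions are re-run to show they were not adversely affected.
# Function names are hypothetical; run with `python -m pytest`.

def scale_dose(weight_kg: float, mg_per_kg: float) -> float:
    # Changed in this release: now rounds to one decimal place.
    return round(weight_kg * mg_per_kg, 1)

def label_text(product: str, dose_mg: float) -> str:
    # Unchanged in this release, but still exercised by the suite.
    return f"{product}: {dose_mg} mg"

def test_changed_function():
    # Evaluates the correctness of the implemented change itself.
    assert scale_dose(70.0, 0.15) == 10.5

def test_unchanged_function_regression():
    # Regression check: unchanged behaviour still meets its specification.
    assert label_text("ProductX", 10.5) == "ProductX: 10.5 mg"
```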

The specific validation effort necessary for each change is determined by the type of change, the development products affected, and the impact of those products on the operation of the system. All proposed modifications, enhancements, or additions to the system should be assessed to determine the effect each change would have on the entire system. This information should determine the extent to which verification and/or validation tasks need to be iterated.

Documentation should be carefully reviewed to determine which documents have been impacted by a change. All approved documents (e.g., specifications, user manuals, drawings, etc.) that have been affected should be updated in accordance with the applicable site or corporate change management procedures. Specifications should be updated before any change is implemented.

SOFTWARE VALIDATION

The Quality System regulation treats “verification” and “validation” as separate and distinct terms. On the other hand, many software engineering journal articles and textbooks use the terms verification and validation interchangeably, or in some cases refer to software “verification, validation, and testing (VV&T)” as if it were a single concept, with no distinction among the three terms.

Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase. Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent conclusion that software is validated. Software testing is one of many verification activities intended to confirm that software development output meets its input requirements. Other verification activities include various static and dynamic analyses, code and document inspections, walkthroughs, and other techniques.

Software validation is a part of the design validation for the project, but is not separately defined in the Quality System regulation. FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.” In practice, software validation activities may occur both during and at the end of the software development life cycle to ensure that all requirements have been fulfilled. Since software is usually part of a larger hardware system, the validation of software typically includes evidence that all software requirements have been implemented correctly and completely and are traceable to system requirements. A conclusion that software is validated is highly dependent upon comprehensive software testing, inspections, analyses, and other verification tasks performed at each stage of the software development life cycle.

Software verification and validation are difficult because a developer cannot test forever, and it is hard to know how much evidence is enough. In large measure, software validation is a matter of developing a “level of confidence” that the application meets all requirements and user expectations for the software’s automated functions. Measures such as defects found in specification documents, estimates of defects remaining, testing coverage, and other techniques are all used to develop an acceptable level of confidence before shipping the product. The level of confidence, and therefore the level of software validation, verification, and testing effort needed, will vary depending upon the application.

Many firms have asked for specific guidance on what the FDA expects them to do to ensure compliance with the Quality System regulation with regard to software validation. Validation of software has been conducted in many segments of the software industry for almost three decades. Due to the great variety of pharmaceuticals, medical devices, processes, and manufacturing facilities, it is not possible to state in one document all of the specific validation elements that are applicable. However, a general application of several broad concepts can be used successfully as guidance for software validation. These broad concepts provide an acceptable framework for building a comprehensive approach to software validation.

Requirements Specification
While the Quality System regulation states that design input requirements must be documented, and that specified requirements must be verified, the regulation does not further clarify the distinction between the terms “requirement” and “specification.” A requirement can be any need or expectation for a system or for its software. Requirements reflect the stated or implied needs of the customer, and may be market-based, contractual, or statutory, as well as an organization’s internal requirements. There can be many different kinds of requirements (e.g., design, functional, implementation, interface, performance, or physical requirements). Software requirements are typically derived from the system requirements for those aspects of system functionality that have been allocated to software. Software requirements are typically stated in functional terms and are defined, refined, and updated as a development project progresses. Success in accurately and completely documenting software requirements is a crucial factor in successful validation of the resulting software.

A specification is defined as “a document that states requirements.” It may refer to or include drawings, patterns, or other relevant documents and usually indicates the means and the criteria whereby conformity with the requirement can be checked. There are many different kinds of written specifications, e.g., system requirements specification, software requirements specification, software design specification, software test specification, software integration specification, etc. All of these documents establish “specified requirements” and are design outputs for which various forms of verification are necessary.

A documented software requirements specification provides a baseline for both validation and verification. The software validation process cannot be completed without an established software requirements specification.

Defect Prevention
Software quality assurance needs to focus on preventing the introduction of defects into the software development process, not on trying to “test quality into” the software code after it is written. Software testing is very limited in its ability to surface all latent defects in software code. For example, the complexity of most software prevents it from being exhaustively tested. Software testing is a necessary activity. However, in most cases software testing by itself is not sufficient to establish confidence that the software is fit for its intended use. In order to establish that confidence, software developers should use a mixture of methods and techniques to prevent software errors and to detect software errors that do occur. The “best mix” of methods depends on many factors, including the development environment, application, size of project, language, and risk.

Time and Effort
To build a case that the software is validated requires time and effort. Preparation for software validation should begin early, i.e., during design and development planning and design input. The final conclusion that the software is validated should be based on evidence collected from planned efforts conducted throughout the software life cycle.

Software Life Cycle
Software validation takes place within the environment of an established software life cycle. The software life cycle contains software engineering tasks and documentation necessary to support the software validation effort.


In addition, the software life cycle contains specific verification and validation tasks that are appropriate for the intended use of the software. No one life cycle model can be recommended for all software development and validation projects, but an appropriate and practical software life cycle should be selected and used for each software development project.

Plans
The software validation process is defined and controlled through the use of a plan. The software validation plan defines “what” is to be accomplished through the software validation effort. Software validation plans are a significant quality system tool. Software validation plans specify areas such as scope, approach, resources, schedules, and the types and extent of activities, tasks, and work items.

Procedures
The software validation process is executed through the use of procedures. These procedures establish “how” to conduct the software validation effort. The procedures should identify the specific actions or sequence of actions that must be taken to complete individual validation activities, tasks, and work items.

Software Validation After a Change
Due to the complexity of software, a seemingly small local change may have a significant global system impact. When any change (even a small change) is made to the software, the validation status of the software needs to be re-established. Whenever software is changed, a validation analysis should be conducted not just for validation of the individual change, but also to determine the extent and impact of that change on the entire software system. Based on this analysis, the software developer should then conduct an appropriate level of software regression testing to show that unchanged but vulnerable portions of the system have not been adversely affected. Design controls and appropriate regression testing provide the confidence that the software is validated after a software change.
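One way to picture the validation analysis described above is a simple dependency walk that identifies the unchanged but vulnerable modules needing regression testing; the module names and dependency map below are purely illustrative assumptions.

```python
# Hedged sketch of a post-change validation analysis: a hypothetical
# module dependency map is walked to find every module that depends,
# directly or indirectly, on the changed module.

DEPENDS_ON = {
    "reporting":    {"database", "calculations"},
    "calculations": {"database"},
    "ui":           {"calculations", "reporting"},
    "database":     set(),
}

def impacted_by(changed: str) -> set[str]:
    """Return every module that directly or indirectly depends on the
    changed module; these are candidates for regression testing."""
    impacted: set[str] = set()
    frontier = {changed}
    while frontier:
        current = frontier.pop()
        for module, deps in DEPENDS_ON.items():
            if current in deps and module not in impacted:
                impacted.add(module)
                frontier.add(module)
    return impacted

# A seemingly small local change to "database" has a global impact:
print(sorted(impacted_by("database")))  # ['calculations', 'reporting', 'ui']
```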

Validation Coverage
Validation coverage should be based on the software’s complexity and safety risk, not on firm size or resource constraints. The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use. For lower-risk applications, only baseline validation activities may be conducted. As the risk increases, additional validation activities should be added to cover the additional risk. Validation documentation should be sufficient to demonstrate that all software validation plans and procedures have been completed successfully.

Flexibility and Responsibility
Specific implementation of these software validation principles may be quite different from one application to another. The manufacturer has flexibility in choosing how to apply these validation principles, but retains ultimate responsibility for demonstrating that the software has been validated.

Software is designed, developed, validated, and regulated in a wide spectrum of environments, and for a wide variety of applications with varying levels of risk. In each environment, software components from many sources may be used to create the software (e.g., in-house developed software, off-the-shelf software, contract software, shareware). In addition, software components come in many different forms (e.g., application software, operating systems, compilers, debuggers, configuration management tools, and many more). The validation of software in these environments can be a complex undertaking; therefore, it is appropriate that all of these software validation principles be considered when designing the software validation process. The resultant software validation process should be commensurate with the safety risk associated with the system, device, or process.

Software validation activities and tasks may be dispersed, occurring at different locations and being conducted by different organizations. However, regardless of the distribution of tasks, contractual relations, source of components, or the development environment, the manufacturer retains ultimate responsibility for ensuring that the software is validated.

Software validation is accomplished through a series of activities and tasks that are planned and executed at various stages of the software development life cycle. These tasks may be one-time occurrences or may be iterated many times, depending on the life cycle model used and the scope of changes made as the software project progresses.

SOFTWARE LIFE CYCLE ACTIVITIES

Software developers should establish a software life cycle model that is appropriate for their product and organization. The software life cycle model that is selected should cover the software from its birth to its retirement. Activities in a typical software life cycle model include the following:
- Quality Planning
- System Requirements Definition
- Detailed Software Requirements Specification
- Software Design Specification
- Construction or Coding
- Testing
- Installation
- Operation and Support
- Maintenance
- Retirement

Verification, testing and other tasks that support software validation occur during each of the above activities. A life cycle model organizes these software development activities in various ways and provides a framework for monitoring and controlling the software development project.

For each of the software life cycle activities, there are certain “typical” tasks that support a conclusion that the software is validated. However, the specific tasks to be performed, their order of performance, and the iteration and timing of their performance will be dictated by the specific software life cycle model that is selected and the safety risk associated with the software application. For very low-risk applications, certain tasks may not be needed at all. However, the software developer should at least consider each of these tasks and should define and document which tasks are or are not appropriate for their specific application.

Quality Planning
Design and development planning should culminate in a plan that identifies necessary tasks, procedures for anomaly reporting and resolution, necessary resources, and management review requirements, including formal design reviews. A software life cycle model and associated activities should be identified, as well as those tasks necessary for each software life cycle activity. The plan should include:
- The specific tasks for each life cycle activity
- Enumeration of important quality factors
- Methods and procedures for each task
- Task acceptance criteria
- Criteria for defining and documenting outputs in terms that will allow evaluation of their conformance to input requirements
- Inputs for each task
- Outputs from each task
- Roles, resources, and responsibilities for each task
- Risks and assumptions
- Documentation of user needs

Management must identify and provide the appropriate software development environment and resources. Typically, each task requires personnel as well as physical resources. The plan should identify the personnel, the facility and equipment resources for each task, and the role that risk (hazard) management will play. A configuration management plan should be developed that will guide and control multiple parallel development activities and ensure proper communications and documentation. Controls are necessary to ensure positive and correct correspondence among all approved versions of the specification documents, source code, object code, and test suites that comprise a software system. The controls also should ensure accurate identification of, and access to, the currently approved versions.

Procedures should be created for reporting and resolving software anomalies found through validation or other activities. Management should identify the reports and specify the contents, format, and responsible organizational elements for each report. Procedures also are necessary for the review and approval of software development results, including the responsible organizational elements for such reviews and approvals.

Requirements
Requirement development includes the identification, analysis, and documentation of information about the application and its intended use. Areas of special importance include allocation of system functions to hardware/software, operating conditions, user characteristics, potential hazards, and anticipated tasks. In addition, the requirements should state clearly the intended use of the software.

The software requirements specification document should contain a written definition of the software functions. It is not possible to validate software without predetermined and documented software requirements. Typical software requirements specify the following:
- All software system inputs
- All software system outputs
- All functions that the software system will perform
- All performance requirements that the software will meet
- The definition of all external and user interfaces, as well as any internal software-to-system interfaces
- How users will interact with the system
- What constitutes an error and how errors should be handled
- Required response times
- The intended operating environment
- All ranges, limits, defaults, and specific values that the software will accept
- All safety-related requirements, specifications, features, or functions that will be implemented in software

Software safety requirements are derived from a technical risk management process that is closely integrated with the system requirements development process. Software requirement specifications should identify clearly the potential hazards that can result from a software failure in the system, as well as any safety requirements to be implemented in software. The consequences of software failure should be evaluated, along with means of mitigating such failures (e.g., hardware mitigation, defensive programming, etc.). From this analysis, it should be possible to identify the most appropriate measures necessary to prevent harm.

A software requirements traceability analysis should be conducted to trace software requirements to (and from) system requirements and to risk analysis results. In addition to any other analyses and documentation used to verify software requirements, a formal design review is recommended to confirm that requirements are fully specified and appropriate before extensive software design efforts begin. Requirements can be approved and released incrementally, but care should be taken that interactions and interfaces among software (and hardware) requirements are properly reviewed, analyzed, and controlled.
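A minimal sketch of such a traceability analysis follows; the requirement IDs and the mapping are invented for illustration, and a real analysis would also trace to risk analysis results.

```python
# Hedged sketch of a requirements traceability analysis: software
# requirements are traced to (and from) system requirements, and gaps in
# either direction are reported. IDs and mappings are illustrative.

system_reqs = {"SYS-1", "SYS-2", "SYS-3"}

# software requirement -> system requirement(s) it implements
trace = {
    "SRS-10": {"SYS-1"},
    "SRS-11": {"SYS-2"},
    "SRS-12": {"SYS-2"},
}

covered = set().union(*trace.values())
orphan_software_reqs = {srs for srs, parents in trace.items() if not parents}
untraced_system_reqs = system_reqs - covered

print("system requirements with no software trace:", untraced_system_reqs)
print("software requirements with no parent:", orphan_software_reqs)
# Here SYS-3 is untraced: either a software requirement is missing or
# SYS-3 is allocated to hardware; the analysis forces that decision.
```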

Design
The decision to implement system functionality using software is one that is typically made during system design. Software requirements are typically derived from the overall system requirements and design for those aspects of the system that are to be implemented using software. There are user needs and intended uses for a finished product, but users typically do not specify whether those requirements are to be met by hardware, software, or some combination of both. Therefore, software validation must be considered within the context of the overall design validation for the system.

A documented requirements specification represents the user’s needs and intended uses from which the product is developed. A primary goal of software validation is to then demonstrate that all completed software products comply with all documented software and system requirements. The correctness and completeness of both the system requirements and the software requirements should be addressed as part of the design validation process for that application. Software validation includes confirmation of conformance to all software specifications and confirmation that all software requirements are traceable to the system specifications. Confirmation is an important part of the overall design validation to ensure that all aspects of the design conform to user needs and intended uses.

In the design process, the software requirements specification is translated into a logical and physical representation of the software to be implemented. The software design specification is a description of what the software should do and how it should do it. Due to the complexity of the project, or to enable persons with varying levels of technical responsibility to clearly understand design information, the design specification may contain both a high-level summary of the design and detailed design information. The completed software design specification constrains the programmer/coder to stay within the intent of the agreed-upon requirements and design. A complete software design specification will relieve the programmer from the need to make ad hoc design decisions.

The software design needs to address human factors. Use error caused by designs that are either overly complex or contrary to users’ intuitive expectations for operation is one of the most persistent and critical problems encountered by the FDA. Frequently, the design of the software is a factor in such use errors. Human factors engineering should be woven into the entire design and development process, including the design requirements, analysis, and tests. Safety and usability issues should be considered when developing flow charts, state diagrams, prototyping tools, and test plans. Also, task and function analysis, risk analysis, prototype tests and reviews, and full usability tests should be performed. Participants from the user population should be included when applying these methodologies.

The software design specification should include:
- Software requirements specification, including predetermined criteria for acceptance of the software
- Software risk analysis
- Development procedures and coding guidelines (or other programming procedures)
- Systems documentation (e.g., a narrative or a context diagram) that describes the systems context in which the program is intended to function, including the relationship of hardware, software, and the physical environment
- Hardware to be used
- Parameters to be measured or recorded
- Logical structure (including control logic) and logical processing steps (e.g., algorithms)
- Data structures and data flow diagrams
- Definitions of variables (control and data) and description of where they are used
- Error, alarm, and warning messages
- Supporting software (e.g., operating systems, drivers, other application software)
- Communication links (links among internal modules of the software, links with the supporting software, links with the hardware, and links with the user)
- Security measures (both physical and logical security)

The activities that occur during software design have several purposes. Software design evaluations are conducted to determine if the design is complete, correct, consistent, unambiguous, feasible, and maintainable. Appropriate consideration of software architecture (e.g., modular structure) during design can reduce the magnitude of future validation efforts when software changes are needed. Software design evaluations may include analysis of control flow, data flow, complexity, timing, sizing, memory allocation, criticality analysis, and many other aspects of the design. A traceability analysis should be conducted to verify that the software design implements all of the software requirements. As a technique for identifying where requirements are not sufficient, the traceability analysis should also verify that all aspects of the design are traceable to software requirements. An analysis of communication links should be conducted to evaluate the proposed design with respect to hardware, user, and related software requirements. The software risk analysis should be re-examined to determine whether any additional hazards have been identified and whether any new hazards have been introduced by the design.

At the end of the software design activity, a Formal Design Review should be conducted to verify that the design is correct, consistent, complete, accurate, and testable before moving to implement the design. Portions of the design can be approved and released incrementally for implementation, but care should be taken that interactions and communication links among various elements are properly reviewed, analyzed, and controlled.

Most software development models will be iterative. This is likely to result in several versions of both the software requirements specification and the software design specification. All approved versions should be archived and controlled in accordance with established configuration management procedures.

Construction or Coding

Software may be constructed either by coding (i.e., programming) or by assembling together previously coded software components (e.g., from code libraries, off-the-shelf software, etc.) for use in a new application. Coding is the software activity in which the detailed design specification is implemented as source code. Coding is the lowest level of abstraction for the software development process. It is the last stage in decomposition of the software requirements, where module specifications are translated into a programming language.

Coding usually involves the use of a high-level programming language, but may also entail the use of assembly language (or microcode) for time-critical operations. The source code may be either compiled or interpreted for use on a target hardware platform. Decisions on the selection of programming languages and software build tools (assemblers, linkers, and compilers) should include consideration of the impact on subsequent quality evaluation tasks (e.g., availability of debugging and testing tools for the chosen language).


Some compilers offer optional levels and commands for error checking to assist in debugging the code. Different levels of error checking may be used throughout the coding process, and warnings or other messages from the compiler may or may not be recorded. However, at the end of the coding and debugging process, the most rigorous level of error checking is normally used to document what compilation errors still remain in the software. If the most rigorous level of error checking is not used for final translation of the source code, then justification for use of the less rigorous translation error checking should be documented. Also, for the final compilation, there should be documentation of the compilation process and its outcome, including any warnings or other messages from the compiler and their resolution, or justification for the decision to leave issues unresolved.
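As a concrete illustration, a final build might be run at the compiler's most rigorous warning level with its complete output captured as evidence. The following is a minimal Python sketch assuming a C source file compiled with gcc; the flags, file names, and log layout are illustrative, not prescribed by any regulation.

    # Run a final build at the most rigorous error-checking level and
    # retain the complete output as compilation documentation.
    import datetime
    import pathlib
    import subprocess

    cmd = ["gcc", "-Wall", "-Wextra", "-Werror", "-o", "app", "main.c"]
    result = subprocess.run(cmd, capture_output=True, text=True)

    pathlib.Path("final_build_log.txt").write_text(
        f"Build date: {datetime.datetime.now().isoformat()}\n"
        f"Command:    {' '.join(cmd)}\n"
        f"Exit code:  {result.returncode}\n"
        "--- compiler output (warnings/errors for resolution) ---\n"
        f"{result.stdout}{result.stderr}\n"
    )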

Firms frequently adopt specific coding guidelines that establish quality policies and procedures related to the software coding process. Source code should be evaluated to verify its compliance with specified coding guidelines. Such guidelines should include coding conventions regarding clarity, style, complexity management, and commenting. Code comments should provide useful and descriptive information for a module, including expected inputs and outputs, variables referenced, expected data types, and operations to be performed. Source code should also be evaluated to verify its compliance with the corresponding detailed design specification. Modules ready for integration and test should have documentation of compliance with coding guidelines and any other applicable quality policies and procedures.
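One possible rendering of such a commenting convention is a header block recording expected inputs, outputs, data types, and the operation performed. The function below is a hypothetical example sketched for illustration, not taken from the text.

    def counts_to_kpa(raw_counts: int, gain: float) -> float:
        """Convert a raw ADC reading to pressure in engineering units.

        Expected inputs:   raw_counts -- integer ADC output in [0, 4095]
                           gain       -- calibration gain (float, > 0)
        Expected output:   pressure in kPa (float)
        Variables used:    OFFSET_KPA -- local calibration constant
        Operation:         linear scaling, raw_counts * gain + OFFSET_KPA
        """
        OFFSET_KPA = 1.25  # hypothetical calibration offset
        if not 0 <= raw_counts <= 4095:
            raise ValueError("raw_counts outside ADC range")
        return raw_counts * gain + OFFSET_KPA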

Source code evaluations are often implemented as code inspections and code walkthroughs. Such static analyses provide a very effective means to detect errors before execution of the code. They allow for examination of each error in isolation and can also help in focusing later dynamic testing of the software. Firms may use manual (desk) checking with appropriate controls to ensure consistency and independence. Source code evaluations should be extended to verification of internal linkages between modules and layers (horizontal and vertical interfaces) and compliance with their design specifications. Documentation of the procedures used and the results of source code evaluations should be maintained as part of design verification.
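Automated tooling can supplement manual inspection. The toy Python sketch below flags functions lacking the descriptive documentation a coding guideline might require; the guideline and the source text it scans are invented for illustration.

    # Toy static check in the spirit of an automated code inspection:
    # flag functions with no docstring. Purely illustrative.
    import ast

    SOURCE = '''
    def documented(x):
        """Scale x per the design specification."""
        return x * 2

    def undocumented(y):
        return y + 1
    '''

    for node in ast.walk(ast.parse(SOURCE)):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            print(f"Guideline violation: function '{node.name}' has no docstring")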

Testing by the Software Developer

Software testing entails running software products under known conditions with defined inputs and documented outcomes that can be compared to their predefined expectations. It is a time-consuming, difficult, and imperfect activity. As such, it requires early planning in order to be effective and efficient.

Test plans and test cases should be created as early in the software development process as feasible. They should identify the schedules, environments, resources (personnel, tools, etc.), methodologies, cases (inputs, procedures, outputs, and expected results), documentation, and reporting criteria. The magnitude of effort to be applied throughout the testing process can be linked to complexity, criticality, reliability, and/or safety issues.

Software test plans should identify the particular tasks to be conducted at each stage of development and include justification of the level of effort represented by their corresponding completion criteria.

An essential element of a software test case is the expected result. It is the key detail that permits objective evaluation of the actual test result. This necessary testing information is obtained from the corresponding predefined definition or specification. A software specification document must identify what, when, how, why, etc., is to be achieved with an engineering (i.e., measurable or objectively verifiable) level of detail in order for it to be confirmed through testing. The real effort of effective software testing lies in the definition of what is to be tested rather than in the performance of the test.
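A minimal sketch of this principle follows: the expected result is derived from the specification in advance, never observed by running the code under test. The dose_in_ml() function and its values are hypothetical.

    import unittest

    def dose_in_ml(weight_kg: float, mg_per_kg: float, mg_per_ml: float) -> float:
        """Hypothetical function under test: weight-based dose volume."""
        return weight_kg * mg_per_kg / mg_per_ml

    class TestDoseCalculation(unittest.TestCase):
        def test_nominal_dose(self):
            # Expected result computed beforehand from the specification:
            # 70 kg x 2 mg/kg / 50 mg/mL = 2.8 mL
            self.assertAlmostEqual(dose_in_ml(70, 2, 50), 2.8, places=6)

    if __name__ == "__main__":
        unittest.main()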

Once the prerequisite tasks (e.g., code inspection) have been successfully completed, software testing begins. It starts with unit-level testing and concludes with system-level testing. There may be a distinct integration level of testing. A software product should be challenged with test cases based on its internal structure and with test cases based on its external specification. These tests should provide a thorough and rigorous examination of the software product's compliance with its functional, performance, and interface definitions and requirements.
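The distinction between specification-based (external) and structure-based (internal) cases can be sketched for a trivial helper; the clamp() function and its boundaries are invented for illustration.

    # Hypothetical unit under test: limit a value to the range [low, high].
    def clamp(value: float, low: float, high: float) -> float:
        if value < low:    # branch 1
            return low
        if value > high:   # branch 2
            return high
        return value       # branch 3

    # Specification-based cases: derived from the stated boundaries.
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10

    # Structure-based cases: chosen so every branch executes at least once.
    for value, expected in [(-5, 0), (15, 10), (7, 7)]:
        assert clamp(value, 0, 10) == expected
    print("all unit-level cases passed")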

User Site Testing

Testing at the user site is an essential part of software validation. The Quality System regulation requires installation and inspection procedures (including testing where appropriate), as well as documentation of inspection and testing to demonstrate proper installation. Likewise, manufacturing equipment must meet specified requirements, and automated systems must be validated for their intended use.

Terminology regarding user site testing can be confusing. Terms such as beta test, site validation, user acceptance test, installation verification, and installation testing have all been used to describe user site testing. The term “user site testing” encompasses all of these and any other testing that takes place outside of the developer’s controlled environment. This testing should take place at a user’s site with the actual hardware and software that will be part of the installed system configuration. The testing is accomplished through either actual or simulated use of the software being tested within the context in which it is intended to function.

User site testing should follow a predefined written plan with a formal summary of testing and a record of formal acceptance. Documented evidence of all testing procedures, test input data, and test results should be retained.

There should be evidence that hardware and software are installed and configured as specified. Measures should ensure that all system components are exercised during the testing and that the versions of these components are those specified. The testing plan should specify testing throughout the full range of operating conditions, and should specify continuation for a sufficient time to allow the system to encounter a wide spectrum of conditions and events in an effort to detect any latent faults that are not apparent during more normal activities.
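The verification that installed component versions match the approved configuration lends itself to scripting. Below is a minimal Python sketch assuming the components are pip-installed packages; the package names and version numbers are hypothetical.

    # Confirm that installed component versions match the approved
    # configuration record.
    from importlib.metadata import PackageNotFoundError, version

    APPROVED_CONFIG = {
        "numpy": "1.26.4",
        "acme-lims-client": "2.0.1",  # hypothetical application package
    }

    for name, expected in APPROVED_CONFIG.items():
        try:
            actual = version(name)
        except PackageNotFoundError:
            actual = "NOT INSTALLED"
        status = "PASS" if actual == expected else "FAIL"
        print(f"{name}: expected {expected}, found {actual} -> {status}")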

During user site testing, records should be maintained of both proper system performance and any system failures that are encountered. The revision of the system to compensate for faults detected during this user site testing should follow the same procedures and controls as for any other software change.

The developers of the software may or may not be involved in the user site testing. If the developers are involved, they may seamlessly carry over to the user’s site the last portions of design-level systems testing. If the developers are not involved, it is all the more important that the user have persons who understand the importance of careful test planning, the definition of expected test results, and the recording of all test outputs.

Maintenance and Software Changes

In addition to the software verification and validation tasks that are part of the standard software development process, the following additional maintenance tasks should be addressed.

Software Validation Plan Revision

For software that was previously validated, the existing software validation plan should be revised to support the validation of the revised software. If no previous software validation plan exists, such a plan should be established to support the validation of the revised software.

Anomaly Evaluation

Software organizations frequently maintain documentation, such as software problem reports, that describes software anomalies discovered and the specific corrective action taken to fix each anomaly. Too often, however, mistakes are repeated because software developers do not take the next step to determine the root causes of problems and make the process and procedural changes needed to avoid recurrence of the problem. Software anomalies should be evaluated in terms of their severity and their effects on system operation and safety, but they should also be treated as symptoms of process deficiencies in the quality system. A root-cause analysis of anomalies can identify specific quality system deficiencies. Where trends are identified (e.g., recurrence of similar software anomalies), appropriate corrective and preventive actions must be implemented and documented to avoid further recurrence of similar quality problems.
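A toy sketch of such trend detection follows, treating recurring root causes across anomaly reports as a trigger for corrective and preventive action. The report data and threshold are invented for illustration.

    # Detect recurring root causes across software problem reports.
    from collections import Counter

    anomaly_reports = [
        {"id": "AR-101", "root_cause": "requirements ambiguity"},
        {"id": "AR-102", "root_cause": "off-by-one in date handling"},
        {"id": "AR-103", "root_cause": "requirements ambiguity"},
        {"id": "AR-104", "root_cause": "requirements ambiguity"},
    ]

    TREND_THRESHOLD = 3  # hypothetical trigger for a CAPA review

    counts = Counter(report["root_cause"] for report in anomaly_reports)
    for cause, count in counts.items():
        if count >= TREND_THRESHOLD:
            print(f"Trend detected: '{cause}' recurs {count} times -> open CAPA")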

Problem Identification and Resolution Tracking

All problems discovered during maintenance of the software should be documented. The resolution of each problem should be tracked to ensure it is fixed, for historical reference, and for trending.

Task Iteration

For approved software changes, all necessary verification and validation tasks should be performed to ensure that planned changes are implemented correctly, all documentation is complete and up to date, and no unacceptable changes have occurred in software performance.

BENEFITS OF QUALIFICATION

Software validation is a critical tool used to assure the quality of software and software-automated operations. Software validation can increase the usability and reliability of the application, resulting in decreased failure rates, fewer recalls and corrective actions, less risk to patients and users, and reduced liability to manufacturers. Software validation can also reduce long-term costs by making it easier and less costly to reliably modify software and revalidate software changes. Software maintenance can represent a very large percentage of the total cost of software over its entire life cycle. An established, comprehensive software validation process helps to reduce the long-term cost of software by reducing the cost of validation for each subsequent release. The level of validation effort should be commensurate with the risk posed by the automated operation. In addition to risk, other factors, such as the complexity of the process software and the degree to which the manufacturer is dependent upon that automated process to produce a safe and effective product, determine the nature and extent of testing needed as part of the validation effort. Documented requirements and risk analysis of the automated process help to define the scope of the evidence needed to show that the software is validated for its intended use.

An Abbreviated Computer Validation History

- 1978—Validation for GMP concept developed by the FDA
- 1979—The U.S.A. issues Federal Regulations for GMP, including validation of automation equipment
- 1983—FDA Blue Book for computer system validation
- 1985—U.S. PMA publishes guideline for validating new and existing computer systems
- 1987—FDA technical report on developing computer systems
- 1988—FDA conference paper on inspecting computer systems
- 1989—EU Code for GMP, including Annex 11 on computerized systems
- 1991—EU Directive for GMP based on the EU Code for GMP
- 1994—U.K. FORUM drafts guidelines to suppliers
- 1994—The U.S.A. proposes new electronic record and electronic signature GMP regulations
- 1994—GAMP first draft distributed in the U.K. for comments
- 1995—U.S. PDA publishes validation guideline for manufacturers
- 1995—The U.S.A. amends GMP regulations affecting automation
- 1995—U.K. FORUM revises draft guidelines to suppliers
- 1997—FDA issues final Part 11 regulations (March)
- 2000—First draft, July (GAMP Europe)
- 2001—Final draft, March (GAMP Americas)
- 2001—Version 1, Quarter 2 (co-publication with PDA)
- 2001—GAMP 4, December: major revision and new content in line with regulatory and technological developments
- 2003—FDA withdraws the draft guidance for industry, 21 CFR Part 11 (February 4)


BIBLIOGRAPHY

Food and Drug Administration References

Glossary of Computerized System and Software Development Terminology, Division of Field Investigations, Office of Regional Operations, Office of Regulatory Affairs, Food and Drug Administration, August 1995.

Guideline on General Principles of Process Validation, Center for Drugs and Biologics, and Center for Devices and Radiological Health, Food and Drug Administration, May 1987.

Technical Report, Software Development Activities, Division of Field Investigations, Office of Regional Operations, Office of Regulatory Affairs, Food and Drug Administration, July 1987.

Other Government References

Adrion WR, Branstad MA, Cherniavsky JC. NBS Special Publication 500-75, Validation, Verification, and Testing of Computer Software, Center for Programming Science and Technology, Institute for Computer Sciences and Technology, National Bureau of Standards, U.S. Department of Commerce, February 1981.

Powell PB, ed. NBS Special Publication 500-98, Planning for Software Validation, Verification, and Testing, Center for Programming Science and Technology, Institute for Computer Sciences and Technology, National Bureau of Standards, U.S. Department of Commerce, November 1982.

Wallace DR, ed. NIST Special Publication 500-235, Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric, Computer Systems Laboratory, National Institute of Standards and Technology, U.S. Department of Commerce, August 1996.

International and National Consensus Standards

IEC 61506:1997, Industrial process measurement and control—Documentation of application software, International Electrotechnical Commission, 1997.

IEEE Std 1012-1986, Software Verification and Validation Plans, Institute of Electrical and Electronics Engineers, 1986.

IEEE Standards Collection, Software Engineering, Institute of Electrical and Electronics Engineers, Inc., 1994 (ISBN 1-55937-442-X).

ISO 9000-3:1997, Quality management and quality assurance standards—Part 3: Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software, International Organization for Standardization, 1997.

ISO/IEC 12207:1995, Information technology—Software life cycle processes, Joint Technical Committee ISO/IEC JTC 1, Subcommittee SC 7, International Organization for Standardization and International Electrotechnical Commission, 1995.

Production Process Software References

Grigonis GJ, Jr., Subak EJ, Jr., Michael W. Validation key practices for computer systems used in regulated operations. Pharm Technol 1997.

Guide to Inspection of Computerized Systems in Drug Processing, Reference Materials and Training Aids for Investigators, Division of Drug Quality Compliance, Associate Director for Compliance, Office of Drugs, National Center for Drugs and Biologics, and Division of Field Investigations, Associate Director for Field Support, Executive Director of Regional Operations, Food and Drug Administration, February 1983.

Technical Report No. 18, Validation of Computer-Related Systems, PDA Committee on Validation of Computer-Related Systems. PDA J Pharm Sci Technol 1995; 49(Suppl. 1).

General Software Quality References

Kaner C, Falk J, Nguyen HQ. Testing Computer Software. 2nd ed. Van Nostrand Reinhold, 1993 (ISBN 0-442-01361-2).

Ebenau RG, Strauss SH. Software Inspection Process. McGraw-Hill, 1994 (ISBN 0-07-062166-7).

Dustin E, Rashka J, Paul J. Automated Software Testing—Introduction, Management and Performance. Addison Wesley Longman, Inc., 1999 (ISBN 0-201-43287-0).

Fairley RE. Software Engineering Concepts. McGraw-Hill Publishing Company, 1985 (ISBN 0-07-019902-7).

Halvorsen JV. A software requirements specification document model for the medical device industry. In: Proceedings IEEE SOUTHEASTCON ’93, Banking on Technology, Charlotte, North Carolina, April 4–7, 1993.

Mallory SR. Software Development and Quality Assurance for the Healthcare Manufacturing Industries. Interpharm Press, Inc., 1994 (ISBN 0-935184-58-9).

Perry WE, Rice RW. Surviving the Top Ten Challenges of Software Testing. Dorset House Publishing, 1997 (ISBN 0-932633-38-2).

Wiegers KE. Software Requirements. Microsoft Press, 1999 (ISBN 0-7356-0631-5).
