
A COST QUALITY MODEL

FOR CMOS IC DESIGN

by

Sandeep Deshpande

Thesis submitted to the Faculty of the

Virginia Polytechnic Institute and State University

in partial fulfillment of the requirements for the degree of

Master of Science

in

Electrical Engineering

APPROVED:

Dr. S.F. Midkiff, Chairman

Dr. J.R. Armstrong

Dr. P.M. Athanas

September, 1994

Blacksburg, Virginia


A COST QUALITY MODEL

FOR CMOS IC DESIGN

by

Sandeep Deshpande

Dr. S.F. Midkiff, Chairman

Electrical Engineering

(ABSTRACT)

With a decreasing minimum feature size in very large scale integration (VLSI)

complementary metal oxide semiconductor (CMOS) technology, the number of transistors

that can be integrated on a single chip is increasing rapidly. Ensuring that these extremely

dense chips are almost free of defects and, at the same time, cost-effective requires

planning from the initial stage of design. This research proposes a concurrent

engineering-based design methodology for layout optimization. The proposed method for

layout optimization is iterative, and layout changes in each design iteration are made based

on the principles of physical design for testability (P-DFT). P-DFT modifies a design such

that the circuit has fewer faults, difficult to detect faults are made easier to detect, and

difficult to detect faults are made less likely to occur.

To implement this design methodology, a mathematical model is required to evaluate

alternate designs. This research proposes an evaluation measure: the cost quality model.

The cost quality model extends known test quality and testability estimation measures for

gate-level circuits to switch-level circuits. To provide high fidelity in testability estimation

and reasonable CPU time overhead, the cost quality model uses inductive fault analysis

techniques to extract a realistic circuit fault list, IDDQ test generation techniques to generate

tests for these faults, statistical models to reduce computational overhead due to test

generation and fault simulation, yield simulation tools, and mathematical models to

estimate test quality and costs. To demonstrate the effectiveness of this model, results are

presented for CMOS layouts of benchmark circuits and modifications of these layouts.


To Mama and Daddy, for all your love and support


Acknowledgments

I want to thank God for giving me the patience and determination to complete this work.

I would like to thank my advisor Dr. S.F. Midkiff for his invaluable advice and guidance

during my studies at Virginia Tech. His grasp of the area has never failed to amaze me. I

would also like to thank him for painstakingly editing this thesis, often spending evenings

and weekends doing it.

I would also like to thank Dr. J.R. Armstrong and Dr. P.M. Athanas for serving on my

committee, taking the time to review this work and for being such wonderful teachers. I

would also like to thank Dr. B.S. Blanchard, Dr. M. Abrams and Dr. R. Broadwater for

making my stay at Virginia Tech such a stimulating and rewarding academic experience.

I would like to thank Salah for making available his fault simulator. I would also like to

thank all my friends here at Virginia Tech who made my stay in Blacksburg so enjoyable.

A special thanks to my parents, grandparents and brother for all their love,

encouragement, and support during these difficult years.


Table of Contents

Chapter 1. Introduction
  1.1. Motivation
  1.2. Approach
  1.3. Contributions of this Work
  1.4. Outline of the Thesis

Chapter 2. Background
  2.1. The Systems Approach and Design Evaluation
  2.2. Physical Design for Testability
  2.3. Inductive Fault Analysis
  2.4. Testing
    2.4.1. The Fault Model
    2.4.2. Circuit Representation
    2.4.3. IDDQ Testing
    2.4.4. Test Generation for Bridging Faults
  2.5. Statistical Models for Testability Estimation
    2.5.1. The Bayesian Model
    2.5.2. The Beta Model
  2.6. Yield Analysis
    2.6.1. Analytical Yield Models
    2.6.2. VLASIC: Simulation-Based Model and Tool
  2.7. Test Quality and Cost
    2.7.1. Test Quality
    2.7.2. Test Cost

Chapter 3. The Circuit Cost Quality Model
  3.1. Design Method and Evaluation
    3.1.1. Integrated Approach to Circuit Design
    3.1.2. Design-Dependent Parameters for Layout Design
    3.1.3. Cost Quality Model for Evaluation
    3.1.4. Estimation of the Model Parameters
      3.1.4.1. Testability Estimation
      3.1.4.2. Cost Estimation
  3.2. Supporting Tools

Chapter 4. Results
  4.1. Undetectability Profiles
  4.2. Yield Analysis
  4.3. Layout Evaluation Using the Cost Quality Model
  4.4. Summary

Chapter 5. Conclusion
  5.1. Summary
  5.2. Directions for Future Research

Bibliography

Appendix A. CARAFE and VLASIC Technology Files

Appendix B. Fabrication and Defect Statistics Files

Vita

List of Illustrations

Figure 1. IC layout design flow.
Figure 2. The systems engineering process applied to IC design.
Figure 3. Inductive fault analysis flow diagram.
Figure 4. Intra-layer bridging fault.
Figure 5. Inter-layer bridging fault.
Figure 6. Hierarchical circuit representation.
Figure 7. IDDQ test generation suite.
Figure 8. Flowchart for deterministic test pattern generation.
Figure 9. Relationships between yield, design, manufacturing process and testing.
Figure 10. VLASIC system structure.
Figure 11. Block diagram of the testing process showing effect of errors.
Figure 12. Design parameters in the layout design process.
Figure 13. Cost quality estimation tool suite.
Figure 14. Flattening a hierarchical MAGIC circuit.
Figure 15. Circuit undetectability using the Bayesian model.
Figure 16. Circuit undetectability using the Beta model.
Figure 17. Undetectability profiles using the entire fault list.
Figure 18. Undetectability profiles using a random sample of the fault list.
Figure 19. Undetectability profiles of MCNC layouts.
Figure 20. Undetectability profiles of MCNC layouts using the entire fault list.

List of Tables

Table 1. Cost Model Definitions
Table 2. Programs Written for this Research
Table 3. Yield Estimates for MCNC Layouts
Table 4. CCQ for MCNC Layouts Using the Entire Fault List
Table 5. CCQ for MCNC Layouts Using a Random Sample of the Fault List
Table 6. Comparison of Circuit Fault Coverage Using Different Estimation Models
Table 7. Maximum Defect Size in the Fabrication Process
Table 8. CCQ for c17 with Different Layouts and Fabrication Statistics Using the Entire Fault List
Table 9. CCQ for c17 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List
Table 10. CCQ for c432 with Different Layouts and Fabrication Statistics Using the Entire Fault List
Table 11. CCQ for c432 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List
Table 12. CCQ for c3540 with Different Layouts and Fabrication Statistics Using the Entire Fault List
Table 13. CCQ for c3540 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List


Chapter 1. Introduction

1.1 Motivation

In the competitive integrated circuit (IC) market, an IC vendor has three major objectives:

1. identify a market need,

2. design a chip in the shortest possible time, and

3. ensure that the number of bad chips shipped to the customer is minimized, i.e. ensure

that the chips shipped are of high quality.

In complementary metal oxide semiconductor (CMOS) technology, the minimum feature

size is currently below one micron, so millions of transistors can be integrated on a single

chip. With these high levels of integration, shipping chips with a low defect level requires

consideration of circuit testability, manufacturability, and cost efficiency throughout the

design cycle.

To develop a design that best meets the specifications for functionality, test,

manufacturability, and cost, an iterative process must be carried out. At each iteration, the

design is evaluated to determine the extent to which it meets the specifications. The

detailed design phase consists of two parts: gate-level circuit design and incorporation of

gate-level design for test (DFT) techniques, and layout modification to incorporate

physical design for testability and optimize yield. This research develops a design method,

including an evaluation measure, to be used in the final layout phase of IC design.


1.2. Approach

In the life-cycle approach to engineering design, the entire life of the system is considered

from inception [4]. This research applies this approach to IC design. The steps in the IC

product life cycle using the systems approach are:

1. identification of the market need for a particular circuit as a packaged IC,

2. conceptual and preliminary design of the IC,

3. detailed circuit design,

4. production and test of the IC, and

5. consumer use.

To ensure product competitiveness, the number of defective products shipped should be

minimal, i.e. the quality of product test should be high. To ensure high test quality in the

production phase (step 4 above) of the life cycle, the detailed design stage (step 3) must

incorporate techniques that improve testability. These include circuit modification to

incorporate design for testability (DFT) techniques and reduce test costs.

A systems approach, based on [4], to the layout stage in IC design is shown in Figure 1.

The “Design/Modify Layout” block is repeated until the layout meets or exceeds the

required specifications, or it is determined that the specifications cannot be satisfied. To

evaluate and compare alternative designs, an evaluation metric is needed. This research

develops a metric to choose between alternative circuit layouts. Computer-aided design

(CAD) tools are used to extract realistic faults and generate tests for those faults. Using

these physically realistic faults ensures that the test patterns generated in the design stage

and used in the production stage detect all, or almost all, of the defects that can occur

during fabrication. This research also investigates statistical methods for testability

estimation to reduce test generation and fault simulation costs.


Figure 1. IC layout design flow.


1.3. Contributions of this Work

The contributions of this work include:

• development of a concurrent engineering-based design method for IC layout,
• development of a layout evaluation measure, the cost quality model, to be used in this design method,
• statistical determination of circuit testability for faults and circuits described at the switch level of abstraction, and
• adaptation of test quality concepts used at the gate level to switch-level fault and circuit models.

1.4. Outline of the Thesis

Chapter 2 discusses the tools and concepts used in this research, including the systems

approach, physical design for testability, inductive fault analysis, switch-level fault models

and test generation for those faults, statistical testability estimation, yield modeling and

test cost and quality. Chapter 3 presents the proposed design method and circuit cost

quality evaluation metric. Chapter 4 presents results for ISCAS '85 benchmark circuits [7]

using the proposed model. Conclusions are provided in Chapter 5.


Chapter 2. Background

It is the aim of the IC designer and manufacturer to produce an IC which not only satisfies

all stated functional and parametric requirements, but is also cost-effective to both the

vendor and to the customer. It is with this in mind that the life-cycle complete concurrent

engineering approach to IC design is adopted in this research. Using this systems

approach, IC design is an iterative process as shown in Figure 1. Alternative designs are

created, evaluated for functional and cost effectiveness and then the decision to redesign

or accept the present design is made based on the evaluation. This research proposes the

use of physical design for testability principles to modify the design. Physical design for

testability is a technique that can reduce circuit faults or make faults more testable through

layout modifications.

To ensure with near certainty that the product shipped is fault-free, i.e. has high test

quality, all the realistic faults that can occur in the circuit should be tested. Prior research

[10,26] has shown that a switch-level circuit model is needed to accurately model the

physically realistic faults in a circuit taking into account the physical layout and the

process fabrication statistics. Using this list of realistic faults, test patterns can be

generated to detect the faults. The tests generated using the list of physically realistic

faults provide high defect coverage and improve the quality of the shipped product.

Realistic fault modeling and test generation result in high test quality, but the associated
CPU time costs can be high. For large VLSI circuits, fault simulation time and test generation

time would constitute a considerable overhead in the iterative design process proposed.


The use of statistical models is explored in this research to reduce this overhead. The

manufactured product yield, which is a function of the fabrication process and the circuit

layout, is also a significant cost factor considered in this work.

This chapter presents a survey of literature related to research in the systems approach and

design evaluation, physical design for testability, inductive fault analysis, testing, statistical

estimation of circuit testability, yield analysis and test cost and quality.

2.1. The Systems Approach and Design Evaluation

The systems approach to product development considers effectiveness measures and

economic feasibility through the entire life cycle of the product. A flow chart of the

systems engineering process as applied to IC design is shown in Figure 2. In this iterative

process, alternative designs are evaluated against the previous iteration. This evaluation

depends on the design-related parameters. In IC design, physical design for testability

principles can be used to create alternate circuit layouts.

The factors that stand in the way of attaining design objectives are known as limiting

factors [4]. Of these limiting factors, some can be modified to achieve a more optimal

design. These factors are known as the strategic factors. The evaluation function provides

a means of evaluating and comparing design alternatives created by making changes in the

strategic factors. Blanchard and Fabrycky [4] define an evaluation measure E to choose
between alternatives:

E = f(X, Y_d, Y_i) \qquad (1)

X denotes the controllable decision variables, Y_d denotes the design-dependent system
parameters, and Y_i denotes the design-independent system parameters. In IC design the

design-dependent parameters include yield, test cost, number of faults, circuit testability

and, thus, test quality. The design-independent parameters include fabrication


Figure 2. The systems engineering process applied to IC design (based on [4]).


process statistics and fixed costs, such as test equipment costs, design personnel costs,

support costs and prototype manufacturing costs.

2.2. Physical Design for Testability

Most current DFT methods employ gate-level circuit additions and modifications to

enhance circuit testability. Gate-level DFT techniques make faults more observable and

controllable. However, even after the incorporation of these techniques, faults may remain

that are difficult to test. Prior research in inductive fault analysis (IFA) led to tools that

extract faults at the layout or the physical level. Using these tools, it is possible to reduce

the number of realistic circuit faults through layout modifications.

Using IFA tools like CARAFE (discussed in Section 2.3), it is possible to generate a list of

all of the possible realistic faults in a circuit. Tests can then be generated for the faults.

Based on the information from fault simulation and test generation, the physical design of

the circuit can be modified to remove faults that are not testable or difficult to test. This

approach to design for testability is known as physical design for testability (P-DFT).

P-DFT criteria include [12]:

1. design the circuit to have fewer faults,

2. make difficult to detect faults easy to detect, and

3. make difficult to detect faults unlikely.

The difficulty of detecting a fault f, Diff_f, is

\mathrm{Diff}_f = \frac{\text{all tests that detect } f}{\text{all possible tests}}

The difficulty of detecting a fault is affected by whether the fault is a delay fault or a logic

fault. Delay faults are more difficult to detect, hence P-DFT techniques can modify the


circuit so as to decrease the likelihood of a delay fault, even though the likelihood of an

easier to detect logic fault may be increased. Similarly, some logic faults are difficult to

test and can be converted to delay faults to increase the detection probability. The difficult

to detect faults can be made less likely by increasing node spacing and wire spacing for

those wires and nodes which if shorted create difficult to detect faults. P-DFT techniques

modify cell placement, routing and logic selection (drive strength) to make a circuit more

testable.
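As a purely illustrative aside (not part of the thesis tool suite), the Python sketch below computes the Diff_f measure from a hypothetical fault-by-test detection matrix and flags faults that P-DFT layout changes might target; the fault names, detection data, and threshold are all assumptions.

```python
# Illustration of ranking faults by the Diff_f measure for P-DFT (assumed data).
# detects[fault] is a list of 0/1 flags, one per candidate test vector.

def detection_difficulty(detects):
    """Return Diff_f = (tests that detect f) / (all possible tests) for each fault."""
    return {fault: sum(row) / len(row) for fault, row in detects.items()}

detects = {
    "bridge_n1_n2":  [1, 0, 0, 1, 0, 0, 0, 0],   # detected by 2 of 8 tests
    "bridge_n3_vdd": [1, 1, 1, 1, 0, 1, 1, 0],   # easy to detect
    "bridge_n7_n9":  [0, 0, 0, 0, 0, 0, 1, 0],   # hard to detect: P-DFT candidate
}

THRESHOLD = 0.2  # arbitrary cut-off for "difficult to detect"
for fault, d in detection_difficulty(detects).items():
    tag = "difficult" if d < THRESHOLD else "ok"
    print(f"{fault}: Diff = {d:.3f} ({tag})")
```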

2.3. Inductive Fault Analysis

To ensure that the test vectors generated detect almost all possible defects, the chosen

fault model should be able to represent the physically realistic defects in the IC. Inductive

fault analysis [10] is a systematic and automatic method for determining which faults are

likely to occur in a specific VLSI circuit. IFA analyzes low-level fault producing

mechanisms, as opposed to using a high-level fault model, and takes into account

fabrication technology, defect statistics, and physical layout. The faults are modeled at the

switch level. The IFA procedure, as described in [10], is shown in Figure 3. As is seen in

the diagram, the inputs are the technology description, the circuit layout and the defect

statistics.

CARAFE [14,15] is an IFA tool that simulates all possible circuit defects and translates

the generated defects into faults. CARAFE enumerates the realistic bridging faults that can

occur in a circuit as a result of defects and calculates the likelihood of occurrence of each

fault based on the physical layout of the circuit. This likelihood is reported relative


Figure 3. Inductive fault analysis flow diagram [10].


to the likelihood of occurrence of the other faults found. CARAFE recognizes two types

of bridging faults:

1. intra-layer bridging faults, as illustrated in Figure 4, caused by extra conducting

material deposited or not etched between physically adjacent conducting layers, and

2. inter-layer bridging faults, as illustrated in Figure 5, caused by a defect in the insulation

material (oxide layer) between two conducting layers causing a contact to be made.

A bridge is a pair of shorted circuit nodes. A bridge fault within a cell, i.e. an intra-cell

bridge fault, can cause a change in the output logic value, an indeterminate logic value at

the output, an increase in the propagation delay for some inputs, or sequential behavior in

a combinational logic cell. To extract the bridging faults, CARAFE uses physical layout

geometry and defect statistics to calculate a sensitive area to determine the likelihood of

occurrence of a fault. The sensitive area, SA, is defined as the area in which the center of a

defect should fall to cause a fault.

SA = l(2r - w)

Here, r is the radius of the defect, w is the spacing between two conducting regions, and l
is the distance for which the two conducting regions are adjacent. Radius r is specified in
the fabrication statistics file, while dimensions l and w are determined by CARAFE. The

likelihood of occurrence of the bridging faults reported by CARAFE depends on the

sensitive area calculation and the probabilities of bridging fault occurrence between the

different layers.
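A minimal Python sketch of the sensitive-area formula SA = l(2r − w) follows; it is an illustration only, not CARAFE itself, and the example dimensions are arbitrary assumptions.

```python
def sensitive_area(defect_radius, spacing, adjacent_length):
    """Sensitive area SA = l(2r - w): region where a defect centre causes a bridge.

    A defect of radius r can short two conductors separated by spacing w only
    if 2r > w; the bridging band has width (2r - w) along the adjacent length l.
    """
    band = 2.0 * defect_radius - spacing
    return adjacent_length * band if band > 0 else 0.0

# Assumed example (arbitrary units): 1.5 um defect, 2.0 um spacing, 40 um adjacency.
print(sensitive_area(defect_radius=1.5, spacing=2.0, adjacent_length=40.0))  # 40.0
```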

2.4. Testing

One of the objectives of this research is evaluation of test quality and cost. The fault

model chosen and the test procedure contribute to both test quality and test cost. This

section discusses the choice of a fault model for test generation for CMOS VLSI circuits,


Figure 4. Intra-layer bridging fault.

Figure 5. Inter-layer bridging fault.


circuit representation and test generation. This section also discusses BODEM [5,6], a

hierarchical switch-level automatic test pattern generation (ATPG) program for CMOS

circuits.

2.4.1. The Fault Model

A fault model is a hypothesis of the fault mechanism present in a circuit [20]. Faults

should model physical defects in a circuit. A single fault may model one or more defects in

the circuit. The stuck-at fault model is a widely accepted fault model and stuck-at fault

coverage is widely used to estimate the defect coverage in an integrated circuit [18]. The

stuck-at fault model is, however, inadequate for test generation for CMOS circuits

because of the different physical structure of the CMOS gate compared to a TTL gate for

which the stuck-at model was developed, different failure mechanisms in CMOS elements,

and the inability to represent CMOS structures such as complex gates and transmission

gates as logical gates [5,20,26,28].

Common CMOS faults include a bridge between two normally unconnected nodes and an

open between two normally connected nodes. Shorts within a gate (intra-gate faults) and

shorts between gates (inter-gate faults) also need to be modeled. The fault types proposed

for the physically occurring failure mechanisms in CMOS circuits are:

• node stuck-at 0 fault,
• node stuck-at 1 fault,
• transistor stuck-off,
• transistor stuck-on,
• general open fault, and
• general bridging fault.

It should be noted that node stuck-at faults and transistor stuck faults are subsets of

general bridging faults. For example, a line stuck-at 0 is equivalent to a short between that

line and ground. A transistor stuck-on is equivalent to a bridge between the


transistor’s drain and source. Research has shown that the use of gate-level circuit models

with the line stuck-at fault model does not accurately model actual defects in a CMOS

circuit [20]. Hence, gate-level circuit representations may not provide a high level of

defect coverage. CARAFE provides a list of physically realistic switch-level bridging faults

for the circuit. This bridging fault model at the switch-level of circuit abstraction has been

chosen for BODEM [5,6] which is the test generation tool used in this research.

2.4.2. Circuit Representation

An important consideration in test generation is circuit representation. Circuits modeled at

the switch level are commonly represented in the form of a graph. For large CMOS

circuits represented as a graph, the memory requirements and computational complexity

pose a formidable problem. Hence, hierarchical techniques have been used in circuit

representation. The graph-based hierarchical circuit representation used in [5] is shown in

Figure 6. At the lowest level in the hierarchy are the interconnected transistors and wires

that are represented by nodes and edges. The next level consists of modules that are

switch-level graphs consisting of connected nodes and edges. Each module has external

nodes that provide an interface between connected modules. These modules lack logical

information, so gate-level information is added to each module. At the highest level, a

network module defines the primary inputs and the observable outputs of a circuit. A

circuit consists of module instances. An instance defines connections between adjacent

modules. Each instantiation requires storage to be allocated only for instance connectivity

and the external nodes of each module. No storage is allocated to the nodes within a

module. This approach conserves memory and reduces computational overhead.

The hierarchical circuit representation developed in [5] uses three types of gates to

provide logical information to the modules:

1. primitive gates which include NAND gates, NOR gates and INVERTERS,


2. complex gates which include AND-OR-INVERT and OR-AND-INVERT, and

3. composite gates which include inverted output AND and OR gates and XOR and

XNOR gates.

Figure 6. Hierarchical circuit representation.
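The hierarchical representation described above can be sketched with a few Python data classes. This is an illustrative data structure only; the class and field names are assumptions and do not reproduce the actual structures of [5]. The key idea it shows is that storage for internal nodes is allocated once per module definition, while each instance stores only its connectivity to external nodes.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Module:
    """A switch-level graph: transistors are edges between named nodes."""
    name: str
    external_nodes: List[str]                      # interface to other modules
    internal_nodes: List[str] = field(default_factory=list)
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (device, node_a, node_b)
    gate_type: str = ""                            # added gate-level information, e.g. "NAND"

@dataclass
class Instance:
    """One use of a module: stores only connectivity, not the module internals."""
    module: Module
    bindings: Dict[str, str]                       # external node -> network node

@dataclass
class NetworkModule:
    """Top level: primary inputs, observable outputs, and module instances."""
    primary_inputs: List[str]
    primary_outputs: List[str]
    instances: List[Instance] = field(default_factory=list)

inv = Module("inv", ["in", "out", "vdd", "gnd"],
             edges=[("pmos", "in", "out"), ("nmos", "in", "out")],
             gate_type="INVERTER")
net = NetworkModule(["a"], ["y"],
                    [Instance(inv, {"in": "a", "out": "y", "vdd": "VDD", "gnd": "GND"})])
print(len(net.instances), net.instances[0].bindings["out"])
```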

2.4.3. IDDQ Testing

An important feature of CMOS structures is that in normal operation they consume very

little power. By taking advantage of this low static current characteristic of CMOS,

quiescent power supply current (IDDQ) measurement techniques can be effectively used to

detect shorts or bridges in a circuit [2,21]. If the shorted nodes are driven by the test

vector to opposite logic values, then the resulting path due to the short causes a high

quiescent current to flow that can be observed and measured on the power supply bus.
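As a hedged illustration of the IDDQ detection condition just described (not the actual BODEM algorithm), the sketch below flags a bridging fault as IDDQ-detectable by a vector when the fault-free circuit drives the two shorted nodes to opposite logic values; the node names and values are assumed example data.

```python
def iddq_detects(bridge, node_values):
    """True if a test vector detects the bridge through elevated quiescent current.

    bridge      : pair of node names that the defect shorts together
    node_values : fault-free logic value (0/1) driven on each node by the vector
    """
    a, b = bridge
    return node_values[a] != node_values[b]

# Assumed example: values a good-circuit simulation would assign for one vector.
values = {"n1": 0, "n2": 1, "n3": 1}
print(iddq_detects(("n1", "n2"), values))  # True  -> high IDDQ expected
print(iddq_detects(("n2", "n3"), values))  # False -> bridge not excited
```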


2.4.4. Test Generation for Bridging Faults

BODEM [5,6] is an automatic test pattern generator that performs deterministic test

pattern generation, fault simulation and test set compaction. It assumes IDDQ testing and

hierarchical switch-level circuit models. The complete test generation suite is shown in

Figure 7. The circuit layout is described using MAGIC [23]. For switch-level test

generation, switch-level bridging faults are extracted using CARAFE [14,15]. Since

CARAFE flattens a hierarchical circuit, the hierarchical output of the MAGIC circuit

extractor (circuit.ext) file is used.

Using the hierarchical circuit description extracted from MAGIC and a bridging fault list

from CARAFE, bridging fault classification and switch-level module extraction is

performed by CARP (CARAFE Output Processor). CARP filters out untestable bridging

faults due to unused feed-through nodes, combines duplicate faults, adjusts their

occurrence counts, and then re-ranks the fault list. Also, CARP creates a modular,

hierarchical circuit representation in a switch-level description (SDL) language. The

bridging fault list (circuit.bf) file and the switch-level description (circuit.sdl) files are

inputs to BODEM, which implements the test generation algorithm based on IDDQ testing.
BODEM (Bridge Oriented Decision Making) is so called because it extends PODEM
(Path Oriented Decision Making) for use in IDDQ test generation at the switch level.

2.5. Statistical Models for Testability Estimation

Test generation and fault simulation have high execution time and computing resource

costs associated with them. Statistical models can be used to reduce these costs by

estimating circuit testability using only a small sample of the circuit faults. This section

presents two models that can be used to estimate circuit testability and fault coverage: the

Bayesian model [25] and the Beta model [9]. The Beta model provides better estimates of

fault coverage than the Bayesian model [9].


Figure 7. IDDQ test generation suite [5].


To generate a test, a deterministic ATPG program performs the steps shown in Figure 8.

In this method, CPU costs due to test generation and fault simulation predominate. Circuit

testability is defined as the probability density function of the detection probabilities of the

faults in the circuit. The quality of a test depends on the fault coverage achieved by the

test. Fault coverage itself is a function of circuit testability. The relation between testability

and fault coverage developed by Seth, Agrawal, and Farhat [25] can be used to reduce

fault simulation and test generation costs. The concept of circuit undetectability, which is a

scalar representation of testability, is also developed in [25].

2.5.1. The Bayesian Model

The model presented in this section, referred to as the Bayesian model, is due to Seth,

Agrawal, and Farhat [25]. In the Bayesian model, a probabilistic relationship is developed

between testability and fault coverage. Data collected during fault simulation and test

generation is used to estimate testability.

The detection probability of a fault is defined as the probability of detecting the fault with

a random vector. It is represented by a probability distribution with density function p(x).

The mean coverage by a vector is

y_1 = \int_0^1 x\, p(x)\, dx

It is shown in [25] that the coverage due to n vectors is

y_n = 1 - I(n) \qquad (2)


Figure 8. Flowchart for deterministic test pattern generation.

where I(n), the undetectability profile of the circuit, is given by

I(n) = \int_0^1 (1 - x)^n\, p(x)\, dx \qquad (3)

Equations (2) and (3) are valid for random test generation. For deterministic test

generation the model assumes that every vector detects at least one new fault not

previously covered, and that a vector may also detect other faults depending on their

detection probability. For deterministic test generation, which is used in this work, the

fault coverage is [25]

y_n = \frac{n}{n_s} + 1 - I(n) \qquad (4)

To determine p(x) and I(n) experimentally, a set of n_s faults is simulated. A fault is
dropped from further consideration by the fault simulator as soon as it is detected. A
random first detection (RFD) variable is associated with each fault. The RFD variable
indicates the vector number at which the fault was first randomly detected. Since random
detection is required, the RFD value of a targeted fault in deterministic test generation is
not affected by the vector generated. For each vector number i, variable w_i represents the
number of faults whose RFD value is i. The number of faults in the fault set whose RFD
value is not defined is given by

w_0 = n_s - \sum_{i=1}^{N} w_i \qquad (5)

where N is the number of test vectors. Using Bayesian detection probability, p(x) and I(n)
are given by

p(x) = \frac{1}{n_s} \sum_{i=0}^{N} w_i\, p_i(x) \qquad (6)

and

I(n) = \frac{w_0 (N+1)}{n_s (n+N+1)} + \frac{1}{n_s} \sum_{i=1}^{N} \frac{i(i+1)\, w_i}{(n+i)(n+i+1)} \qquad (7)

where p_i(x) is the fault detection probability for vector i.
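A small numerical sketch of Equations (5) and (7) follows. It is an illustration only; the RFD counts, sample size, and vector count are made-up values, not results from the thesis.

```python
def bayesian_undetectability(n, w, n_s, N):
    """Bayesian undetectability profile I(n) from RFD counts (Eq. 7).

    w[i] = number of sampled faults whose RFD value is i (1..N);
    w_0 = n_s - sum(w[1:]) counts faults never randomly detected (Eq. 5).
    """
    w0 = n_s - sum(w[1:])
    total = w0 * (N + 1) / (n_s * (n + N + 1))
    for i in range(1, N + 1):
        total += (i * (i + 1) * w[i]) / (n_s * (n + i) * (n + i + 1))
    return total

# Assumed example: 100 sampled faults, 10 vectors, made-up RFD counts.
N, n_s = 10, 100
w = [0, 40, 20, 12, 8, 6, 4, 3, 2, 1, 1]   # index 0 is recomputed inside the function
for n in (1, 5, 10, 50):
    print(n, round(bayesian_undetectability(n, w, n_s, N), 4))
```

As a sanity check, the expression gives I(0) = 1, i.e. no vectors applied means no faults covered.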


2.5.2. The Beta Model

The Beta model is due to Farhat and From [9]. According to the Beta model, the

testability probability function p(x) has the distribution

p(x) = P_0\,\delta(x) + (1 - P_0)\,\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1} (1-x)^{\beta-1} \qquad (8)

p(x) = P_0\,\delta(x) + (1 - P_0)\, p_\beta(x), \qquad 0 \le x \le 1,\ \alpha > 0,\ \beta > 0 \qquad (9)

where p(x) is the detection probability distribution of a fault, \Gamma(\cdot) is the gamma function,
\delta(x) is the impulse function at x = 0, P_0 is the fraction of aborted and redundant faults in the
circuit, and p_\beta(x) is the beta component of the testability distribution. Testability is

modeled as a function of a discrete impulse function at zero and a continuous beta

distribution. The impulse function is used to model the redundant faults in the circuit.

For the Beta model, the undetectability profile is

I_\beta = I(n, \alpha, \beta) = \frac{\Gamma(n+\beta)\,\Gamma(\alpha+\beta)}{\Gamma(\beta)\,\Gamma(n+\alpha+\beta)} \qquad (10)

where n is the number of vectors. I_\beta(n) can be simplified to obtain

I_\beta(n) = \prod_{j=1}^{n} \frac{n - j + \beta}{n - j + \alpha + \beta} \qquad (11)

The overall undetectability profile is

I(n) = P_0 + (1 - P_0)\, I_\beta(n) \qquad (12)

Knowing P_0 and parameters \alpha and \beta, the testability profile and the fault coverage can be
obtained. For deterministic fault coverage, the testability parameters are estimated by
applying deterministic tests to a random sample of faults. The following procedure is
adopted to estimate P_0 for deterministic test pattern generation [9].


1. Generate an initial random sample of the fault list and generate N vectors.

2. Form a different random sample of size n with faults f_1, f_2, ..., f_n and simulate the N
vectors using the new sample of faults.

3. Associate with each fault f_i a counter x_i, where x_i is the number of times out of N that
the fault f_i is detected. Let X_1, X_2, ..., X_n be the actual detection counts. From the faults
with detection count equal to zero, the sum of the redundant and the aborted faults is
estimated and used as an estimate for P_0.

Parameters \alpha and \beta can be estimated based on the conditional distribution of X_i. \alpha and \beta
can be determined using the moment estimator method or the iterative maximum
likelihood estimator method. In this research the undetectability profile is used as a figure
of merit for a particular layout, so the accuracy provided by the iterative maximum
likelihood estimator is not necessary. The moment estimator method used determines
\hat{\alpha} and \hat{\beta}, estimates for \alpha and \beta, respectively, using the following equations.

\hat{\alpha} = \frac{\bar{X}\,(N\bar{X} - m_2)}{N m_2 - (N-1)\bar{X}^2 - N\bar{X}} \qquad (13)

\hat{\beta} = \frac{(N - \bar{X})(N\bar{X} - m_2)}{N m_2 - (N-1)\bar{X}^2 - N\bar{X}} \qquad (14)

N is the total number of vectors and

m_2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2, \qquad \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i

These values are substituted into Equation (12) to get \hat{I}(n, \hat{\alpha}, \hat{\beta}), the moment estimate
of I(n, \alpha, \beta).
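The sketch below walks through this procedure numerically with made-up detection counts (illustrative only, not thesis data), using the moment estimators of Equations (13)-(14) and the profile of Equations (11)-(12); X_i is the number of times out of N vectors that sampled fault f_i was detected.

```python
def beta_moment_estimates(counts, N):
    """Moment estimates of alpha, beta, and P0 from detection counts X_i (Eqs. 13-14)."""
    n = len(counts)
    xbar = sum(counts) / n
    m2 = sum(x * x for x in counts) / n
    denom = N * m2 - (N - 1) * xbar ** 2 - N * xbar
    alpha = xbar * (N * xbar - m2) / denom
    beta = (N - xbar) * (N * xbar - m2) / denom
    p0 = sum(1 for x in counts if x == 0) / n      # aborted + redundant fraction
    return alpha, beta, p0

def undetectability(n, alpha, beta, p0):
    """Overall undetectability I(n) = P0 + (1 - P0) * I_beta(n) (Eqs. 11-12)."""
    i_beta = 1.0
    for j in range(1, n + 1):
        i_beta *= (n - j + beta) / (n - j + alpha + beta)
    return p0 + (1 - p0) * i_beta

# Assumed example: 20 sampled faults, 8 vectors, made-up detection counts.
counts = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 8, 8]
a, b, p0 = beta_moment_estimates(counts, N=8)
print(round(a, 3), round(b, 3), p0)
print([round(undetectability(n, a, b, p0), 3) for n in (1, 4, 8, 32)])
```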


2.6. Yield Analysis

Yield is the fraction of parts produced that are free of all process and logic defects. Yield

is, therefore, an important cost factor in IC manufacturing. Yield depends on the process

defect density in the different layers, circuit layout, and circuit packaging. Many analytical

models have been proposed to estimate yield [13,17], some of which are discussed in this

section. Monte Carlo simulation methods are also used in yield estimation [29]. They

accurately model the actual physical phenomena that produce defects, but have high CPU

time overhead and cannot be used for large VLSI circuits. This section discusses VLASIC

[29,30], a yield simulation tool that uses Monte Carlo techniques.

Yield modeling and circuit testing are interrelated. This relation is shown graphically in

Figure 9. Defects in an IC consist of global defects and local defects. The circuit faults

caused by the two kinds of defects are not independent. However, it is generally true that

global defects cause a reduction in the parametric yield, i.e. they cause variation in

switching speed and output voltages and currents, while local defects generally affect

topology and cause functional defects. Maly [17] identifies three important features of any

yield model.

1. Fidelity of the model: a measure of how accurately the model describes actual physical

phenomena causing yield losses.

2. Complexity of the model: the CPU time increase caused by an increase in the size of

the IC.

3. Dimensionality of the model: the number of variables that must be identified before the

model can be used.

2.6.1. Analytical Yield Models

Many analytical models have been proposed for yield estimation [13,17]. Some of these

are described in this section.


Figure 9. Relationships between yield, design, manufacturing process and testing [17].


The basic yield equation, based on Poisson statistics, is

Y = e^{-A D_0}

where Y is the yield, A is the IC die area, and D_0 is the defect density. Moore [19] modeled
IC yield using the formula

Y = e^{-\sqrt{A/A_0}}

where A_0 is the reference area. Based on the assumption that the defect density is
exponentially distributed, Seeds [24] derived the yield formula

Y = \frac{1}{1 + A D_0}

Okabe [22] proposed a two-dimensional yield model based on the negative binomial
distribution,

Y = \left(1 + \frac{\lambda_0}{\sigma}\right)^{-\sigma}

where \lambda_0 = A D_0 and \sigma is the clustering parameter.

These simple analytical models do not account for the details of the chip layout and use
defect density as a model parameter. They assume that fault density is proportional to
defect density. Thus, these models have low fidelity. Improvement is needed to
incorporate defects that can occur in different IC layers. A more recently
proposed analytical model by Kooperberg [16] is more general:

Y = \prod_{i=1}^{N} \left(1 + \frac{A_j D_i P_{ij}}{c_i}\right)^{-c_i}

Here i is the ith type of defect, j is the jth module, P_{ij} is the probability that defect i will
cause a fault in area j, and c_i is a constant relating to the density of the ith defect type.

The analytical models presented in this section, with the exception of the one proposed by

Kooperberg, have poor fidelity and low complexity and dimensionality. The model

proposed by Kooperberg has better fidelity than the earlier proposed models; complexity

and dimensionality are also higher than for the simple analytical models.
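To make the comparison concrete, here is a small Python sketch (illustration only; the die area, defect density, and clustering values are arbitrary assumptions) evaluating the Poisson, Seeds, and negative binomial yield formulas given above for the same die.

```python
import math

def poisson_yield(area, d0):
    """Basic Poisson model: Y = exp(-A * D0)."""
    return math.exp(-area * d0)

def seeds_yield(area, d0):
    """Seeds model (exponentially distributed defect density): Y = 1 / (1 + A*D0)."""
    return 1.0 / (1.0 + area * d0)

def negative_binomial_yield(area, d0, clustering):
    """Negative binomial model: Y = (1 + A*D0/sigma)^(-sigma)."""
    return (1.0 + area * d0 / clustering) ** (-clustering)

# Assumed example: 0.5 cm^2 die, 1.2 defects/cm^2, clustering parameter 2.
A, D0 = 0.5, 1.2
print(round(poisson_yield(A, D0), 3))                 # ~0.549
print(round(seeds_yield(A, D0), 3))                   # ~0.625
print(round(negative_binomial_yield(A, D0, 2.0), 3))  # ~0.592
```

The clustered (negative binomial) estimate falling between the Poisson and Seeds estimates reflects the clustering parameter interpolating between the two assumptions about defect distribution.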


2.6.2. VLASIC: Simulation-Based Model and Tool

VLASIC (VLSI Layout Simulation for Integrated Circuits) [29,30] is a yield simulator

based on Monte Carlo simulation techniques that uses process technology and defect

statistics to place random spot defects on a chip layout and determine what circuit faults, if
any, occur. VLASIC has been developed for determining functional yield. The VLASIC

system structure is shown in Figure 10. There are six basic steps:

1. generate as many samples as desired in the simulation,

2. generate and place defects on the layout,

3. analyze the modified layout for circuit faults,

4. filter out faults that do not affect the functional yield and combine faults,

5. output a chip sample containing a list of circuit faults that have occurred, and

6. on completion of simulation for all the samples, generate chip fault lists and the

frequency of occurrence of the faults.

The defects placed by VLASIC on the layout are:

e extra material defects which include shorts created by extra material deposition, new

devices created if extra polysilicon spans active material creating a transistor, and

opens if extra polysilicon breaks an active line in forming a new transistor,

e missing material defects,

¢ oxide pinhole defects, and

e junction leakage defects.

VLASIC uses statistical models for defect size distribution, spatial distribution, radial

distribution and distribution between lots and wafers to place the defects on the chip.

VLASIC has high fidelity, but this fidelity is accompanied by high complexity and

dimensionality.


Figure 10. VLASIC system structure [29].

2.7. Test Quality and Cost

Layout effectiveness is a function of the associated test cost and quality. Test cost and

quality are related to each other by the cost quality metric developed in this research. For

a linear increase in circuit complexity, the cost of test increases faster than linearly. Over

the past few years, product cost has decreased even with an increase in circuit complexity.

However, test cost as a percentage of the total product cost has increased considerably

[8]. No fabrication process is perfect, so defective parts are produced. Testing is not

perfect either due to fault redundancy and economic limitations in the testing process.

Therefore, parts that are shipped may contain defects. This section discusses test quality,

test cost, fault coverage, errors in testing, and the resulting economic effects.

2.7.1. Test Quality

Testing in this work refers to Boolean testing which is the process of applying a sequence

of test patterns to a digital device and observing outputs with the objective of determining

whether the device functions correctly [18]. Test cost depends on the cost of test

equipment, the cost of fault simulation and test generation, and the cost for time required

for the actual test application.

Test transparency, TT, is the fraction of the defects not detected by the tests. The

complement of fault coverage is commonly used to estimate test transparency. Test cost

can be reduced if a higher test transparency is allowed, but test quality is reduced by

higher test transparency, so a tradeoff must be achieved between the required test quality

and the allowable test cost.

The process yield, Y, is the fraction of parts produced that are free of defects. It is not

possible to exactly determine this parameter since some defects may not be detected. Yield


can be estimated from the testing process, which gives an indication of the number of

defective parts in a batch.

The product quality level, QL, is the fraction of the good parts among the parts that pass

all tests and are shipped. Normally, manufacturers use a parameter known as the defect

level, DL, which is the complement of the quality level.

DL=1-QL

In the model developed by McCluskey and Buelow [18], Boolean testing is considered;

delay faults and other parametric faults are not considered. The process yield used should

be the Boolean yield, the fraction of parts produced that are free of all Boolean defects.

This model assumes that n point defects are present, each defect occurs with uniform

independent probability, and the test set detects m of the n possible defects. According to

this model

TT = 1 - \frac{m}{n} \qquad (15)

It is shown in [18,33] that

QL = Y^{TT} \qquad (16)

Hence, the defect level is given by

DL = 1 - Y^{TT} \qquad (17)

Errors in the testing process can affect the defect level and effective yield. Two types of

errors can occur in the testing process. A Type I error occurs with probability \alpha when a
tester fails a chip that is not defective. A Type II error occurs with probability \beta when a
chip is defective, but is not failed by the tester. Figure 11, from [32], shows the testing
process and the associated errors. Probability \alpha is a function of test equipment quality,
while probability \beta is a function of the test coverage and the yield. The Type II error
probability \beta is given by [32]


Figure 11. Block diagram of the testing process showing effects of errors [32].

\beta = \frac{Y^{1-TT} - Y}{1 - Y} \qquad (18)

The equations developed previously in this section for the defect level assume that the

probability for Type I errors is zero. However, IC manufacturers have found this to be

untrue in practice.

Chips that are failed as a result of testing often contain a number of good chips. Williams

and Hawkins [32] derive a relation for the defect level when there is a Type I error in the

tester. This defect level, denoted as DL_\alpha, is given by

DL_\alpha = \frac{1 - Y^{TT}}{1 - \alpha Y^{TT}} \qquad (19)

The fraction of the good chips among those failed, denoted as DF_\alpha, is given by [32]

DF_\alpha = \frac{\alpha Y}{1 - Y^{1-TT} + \alpha Y} \qquad (20)

The models above are useful in estimating test costs for the producer and the customer,

some of which are discussed in Section 2.7.2.
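A minimal Python sketch of Equations (16)-(20) follows; the yield, transparency, and Type I error values are illustrative assumptions, not figures from the thesis.

```python
def quality_level(Y, TT):
    """QL = Y**TT, so DL = 1 - QL (Eqs. 16-17)."""
    return Y ** TT

def type2_error(Y, TT):
    """Probability beta that a defective chip is not failed by the test (Eq. 18)."""
    return (Y ** (1 - TT) - Y) / (1 - Y)

def defect_level_with_tester_error(Y, TT, alpha):
    """Defect level when the tester fails good chips with probability alpha (Eq. 19)."""
    return (1 - Y ** TT) / (1 - alpha * Y ** TT)

def good_fraction_of_failed(Y, TT, alpha):
    """Fraction of good chips among the failed chips (Eq. 20)."""
    return alpha * Y / (1 - Y ** (1 - TT) + alpha * Y)

# Assumed example: 60% Boolean yield, 5% test transparency, 1% Type I error rate.
Y, TT, alpha = 0.6, 0.05, 0.01
print(round(1 - quality_level(Y, TT), 4))                    # defect level, ideal tester
print(round(type2_error(Y, TT), 4))
print(round(defect_level_with_tester_error(Y, TT, alpha), 4))
print(round(good_fraction_of_failed(Y, TT, alpha), 4))
```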

2.7.2. Test Cost

Test cost is an important component of chip cost for three reasons:

1. if a defective chip is not failed, i.e. a chip is insufficiently tested, the customer receives

defective chips which is considered a serious problem,

2. if a chip is tested thoroughly for all possible faults, the test costs can be quite high,

possibly making the chip non-competitive, and

3. if the tester fails good chips, effective yield decreases which again reduces the

competitiveness of the product.

Engineering compromises must be made to reduce test cost and, at the same time,

maintain test quality at the required level.

If n chips are tested, Williams and Hawkins [32] derive test costs for the manufacturer,
C_m, as

C_m = nk_1 + nk_2\, P(D \cap \bar{F}) + nk_3\, P(D \cap F) + nk_3\, P(G \cap F)

C_m = nk_1 + nk_2\,(Y^{1-TT} - Y) + nk_3\,(1 - Y^{1-TT}) + nk_3\,\alpha Y \qquad (21)

where k_1 is the cost to test one chip, k_2 is the cost of a defective chip erroneously passed
by the tester, and k_3 is the cost of a chip that has been failed by the tester and discarded.
P(D \cap \bar{F}) is the probability that a defective chip is not failed, P(D \cap F) is the probability that
a chip is defective and is failed, and P(G \cap F) is the probability that a good chip is failed.

The first term in Equation (21) is the cost to test the n chips. The second term is the cost

of those defects discovered by the customer when a Type II error occurs. The third term


corresponds to adding value to defective chips that are discarded. The fourth term

corresponds to adding value to good chips that are discarded. These test costs to the

manufacturer and, in extension, to the customer are discussed further in Section 3.1.4.
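A sketch of Equation (21) with assumed cost figures follows; the per-chip costs, volume, yield, and error rates are purely illustrative and are not taken from the thesis.

```python
def manufacturer_test_cost(n, k1, k2, k3, Y, TT, alpha):
    """C_m = n*k1 + n*k2*(Y^(1-TT) - Y) + n*k3*(1 - Y^(1-TT)) + n*k3*alpha*Y (Eq. 21)."""
    pass_rate = Y ** (1 - TT)
    escapes = pass_rate - Y           # defective chips that pass (Type II escapes)
    failed_defective = 1 - pass_rate  # defective chips correctly failed and discarded
    failed_good = alpha * Y           # good chips failed by the tester (Type I)
    return n * (k1 + k2 * escapes + k3 * failed_defective + k3 * failed_good)

# Assumed example: 10,000 chips, $0.50 per test, $50 per escape, $5 scrapped-chip value.
print(round(manufacturer_test_cost(10_000, 0.5, 50.0, 5.0, Y=0.6, TT=0.05, alpha=0.01), 2))
```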


Chapter 3. The Circuit Cost Quality Model

Determining a circuit layout that meets quality and test cost requirements, as part of the

system design process, requires an evaluation metric. The cost quality model developed in

this work is an evaluation metric that can be used for layout modification during detailed

design. The metric considers realistic faults modeled at the switch level, tests for bridging

faults, Ippg test, and yield. This chapter discusses the concurrent engineering approach to

IC design developed in this work, the evaluation model called the cost quality model, and

the suite of tools that have been integrated and adapted to implement the cost quality

model.

3.1. Design Method and Evaluation

The systems approach to IC design, as shown in Figure 1, is adopted in this work. This

research develops an evaluation measure, the cost quality model, that can be used to make

the redesign decision. This section discusses the design-dependent parameters that

influence the cost quality metric, the model adopted for circuit cost quality evaluation and

mathematical models used to estimate the parameters that influence this evaluation metric.

3.1.1. Integrated Approach to Circuit Design

Given a system concept, IC design consists of preliminary design and detailed design.

Preliminary design involves specifying functional and parametric requirements, identifying

the process technology to be used, and selecting the general design approach. In detailed

design, the circuit is designed in accordance to the specifications determined in preliminary

design. Detailed design includes both gate-level, or logic, design and layout design. Each

of these two stages requires its own design evaluation function. The evaluation function


developed in this research is for the circuit layout stage. This research assumes that gate-

level design tradeoffs have been carried out prior to layout design. A flow diagram for

iterative detailed design of the layout, which can be supported by computer-aided design

(CAD) tools, is shown in Figure 12. In this work the “Modify Layout” block incorporates

P-DFT techniques and layout optimization, the “Estimation/Optimization” block estimates

design-dependent parameters given a near optimal test set, and the “Evaluate” block

computes the evaluation metric. The design-dependent parameters are enumerated in

Section 3.1.2.

3.1.2. Design-Dependent Parameters for Layout Design

There are four factors that need to be considered to optimize a design. Strategic

design-dependent parameters are associated with each factor which, if modified, may

change the numeric value of the evaluation measure. A list of design parameters associated

with each factor, as applicable to CMOS VLSI design, is given below. The list is not

exhaustive, but indicates most of the important design parameters. Note that the

parameters are not necessarily independent of each other.

1. Circuit design optimization
   • Circuit layout for latchup prevention.
   • Choice of metal thickness for wires to prevent metal migration, to keep propagation delay within bounds for long wires, and to ensure that satisfactory power and signal voltage levels are presented to each gate.
   • Distribution and buffering of clock lines to control clock skew.
   • Speed optimization through optimizing load capacitance and the choice of the supply voltage.
   • Optimization of the dissipated power through choice of the supply voltage V_DD, load capacitance C_L, and switching frequency f_p.
   • Design to prevent degradation due to charge sharing.
   • Choice of clocking strategy.


Figure 12. Design parameters in the layout design process.


2. Circuit design for testability
   • Design modifications to increase controllability and observability.
   • Choice of DFT technique, including ad-hoc testing, scan-based approach, built-in self-test, and physical design for testability.
3. Circuit design for manufacturability
   • Adapting the design for the fabrication process.
   • Optimizing the layout for yield.
4. Design for cost effectiveness
   • Life-cycle cost.

The metric developed in this research addresses layout testability, manufacturability, and

cost effectiveness.

3.1.3. Cost Quality Model for Evaluation

The design process of Figure 12 attempts to reduce manufacturing cost through physical

layout modifications that:

• reduce the number of possible circuit faults,
• make difficult to test faults easier to test,
• make difficult to test faults unlikely, and
• optimize the layout to give maximum possible yield for the fabrication process.

This research develops a metric for evaluating the effectiveness of the layout

modifications. Decisions to change the layout or to retain the original or previous “best”

layout are made based on the metric. For the purpose of this research, the term circuit cost

quality (CCQ) is defined as the extent to which a circuit meets the required functional,

testability and manufacturing specifications. The design space in this research is the circuit

layout. That is, by varying the layout, the value of the CCQ metric changes. The metric

depends on three primary factors:

1. circuit testability,


2. circuit yield, and

3. number of test vectors.

The form of the evaluation measure, given by Equation (1), is E = f(X, Y_d, Y_i). In the
proposed model, X corresponds to the number and set of test vectors required for the
target coverage, which can be optimized for a given set of faults for each layout, Y_d
includes yield and number of faults, and Y_i corresponds to process technology and fixed
costs, such as design personnel costs, support costs and prototype manufacturing costs.

Circuit cost quality is directly proportional to the quality level, QL, and inversely proportional
to the test time, t_T, since test time is directly proportional to the number of test vectors.
The quality level is given by Equation (16) to be

QL = Y^{TT}

The quality level is used in evaluating CCQ to relate circuit testability and yield.

CCQ \propto \frac{QL}{t_T} \qquad (22)

Here, t_T is the time to apply the test vectors.
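A minimal sketch of the CCQ relation in Equation (22) follows (illustration only; the proportionality constant is taken as 1, and the yields, transparencies, vector counts, and tester rate are assumed values used to compare two hypothetical layouts).

```python
def circuit_cost_quality(Y, TT, n_vectors, seconds_per_vector=1e-6):
    """CCQ proportional to QL / t_T, with QL = Y**TT and t_T the test-application time."""
    quality = Y ** TT
    test_time = n_vectors * seconds_per_vector
    return quality / test_time

# Comparing two hypothetical layouts of the same circuit:
layout_a = circuit_cost_quality(Y=0.62, TT=0.04, n_vectors=120)
layout_b = circuit_cost_quality(Y=0.58, TT=0.02, n_vectors=95)
print(round(layout_a), round(layout_b), "keep A" if layout_a > layout_b else "keep B")
```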

3.1.4. Estimation of the Model Parameters

In the cost quality model developed in Section 3.1.3, the three model parameters, yield,

test transparency and the number of vectors (test time) must be determined for each

alternate layout. Also, a cost estimate must be made for the layout. The yield and the

number of test vectors are outputs of the VLASIC yield simulator [29] and the BODEM

test generator [5], respectively. The use of these tools is described in Section 3.2. This

section discusses testability estimation and cost estimation.


3.1.4.1. Testability Estimation

In this research, testability is estimated using the circuit undetectability profile based on

the statistical methods described in Section 2.5. The main advantage of using such methods to
estimate circuit testability is reduced computation time, because the statistical methods

can estimate circuit undetectability using a small sample of the circuit faults. Statistical

methods also provide a figure of merit for the circuit that is accurate in estimating actual

fault coverage [25]. The use of statistical models for testability estimation in the method

proposed in this work is important. The overhead for test generation and fault simulation

at each design iteration would be considerable for large VLSI circuits, thus significantly

increasing the time for each design iteration. This research estimates test transparency

using the approach below that is an adaptation of Equation (4).

TT = I(n) - n/Y    if (I(n) - n/Y) > 0    (23)
TT = 0             if (I(n) - n/Y) ≤ 0

Here, n is the number of test vectors at which the undetectability, I(n), is calculated and Y

is the total number of faults present in the circuit. The approximation that the test

transparency is zero when the test vectors provide almost complete fault coverage is made

since statistical models underestimate undetectability for IDDQ testing of switch-level

bridging faults when the full set of test vectors is used.
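A minimal Python sketch of the test transparency calculation of Equation (23) is given below, assuming the undetectability I(n) has already been obtained from one of the statistical models; the clipping to zero implements the approximation described above, and the example values are assumed.

def test_transparency(undetectability, n_vectors, total_faults):
    # Equation (23): TT = I(n) - n/Y, clipped at zero.
    # undetectability: I(n), estimated at n_vectors test vectors
    # n_vectors:       n, the number of test vectors applied
    # total_faults:    Y, the total number of faults in the circuit
    tt = undetectability - n_vectors / total_faults
    return tt if tt > 0 else 0.0

# Example with assumed values for a medium-sized layout.
print(test_transparency(undetectability=0.016, n_vectors=40, total_faults=2790))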

The test transparency is calculated using Equation (23) at an arbitrary number of test

vectors which is fixed for a particular circuit. This value is used to calculate CCQ. Two

statistical approaches are used to calculate the undetectability profile: the Bayesian

approach [25] and the Beta method [9]. A comparison between the two approaches for

switch-level circuits is made to evaluate the effectiveness of the techniques for different

layout sizes. The fault coverage obtained using the complement of Equation (23) is also

compared with the standard fault coverage and the weighted circuit fault coverage, WFC,

obtained from BODEM [6]. BODEM calculates WFC as


WFC = ( Σ_{i=1}^{n} w_i d_i ) / ( Σ_{i=1}^{n} w_i )    (24)

where w_i is the relative weight of occurrence of fault f_i, d_i = 1 if fault f_i is tested and
d_i = 0 otherwise, and n is the number of faults.
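The weighted fault coverage of Equation (24) can be computed directly from the fault weights and detection flags, as in the Python sketch below; this is an illustration only and does not reproduce BODEM's implementation.

def weighted_fault_coverage(weights, detected):
    # WFC = sum(w_i * d_i) / sum(w_i), Equation (24).
    # weights:  relative weight of occurrence w_i of each fault f_i
    # detected: d_i, True if fault f_i is tested and False otherwise
    covered_weight = sum(w for w, d in zip(weights, detected) if d)
    return covered_weight / sum(weights)

# Example: four faults with unequal likelihoods of occurrence, three detected.
print(weighted_fault_coverage([0.5, 0.3, 0.15, 0.05], [True, True, False, True]))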

3.1.4.2. Cost Estimation

This section extends the manufacturing cost calculations in [32] to customer costs. The
parameters used are given in Table 1. Consider a process in which n_p chips are
manufactured and tested. The total cost to the producer for the chips, CP_P, is given by

CP_P = C_P + C_T    (25)

The cost of product, C_P, includes costs for design personnel, processing wafers, wafer
test, mask preparation, and packaging, and support such as computers, software, and
training. The cost of test, C_T, is given by [32]:

C_T = C_ATC + C_BCD + C_GCD + C_NFBC    (26)

C_ATC is the actual cost to test a chip, C_BCD is the cost of bad chips discarded, C_GCD is
the cost of good chips discarded, and C_NFBC is the cost of not failing a bad chip.

The actual cost to test an IC is

C_ATC = C_FS + C_TG + C_t    (27)

where C_FS is the fault simulation cost, C_TG is the test generation cost and C_t is the time
cost of the testing process.

Based on [32], C_BCD is given by

C_BCD = VAC_P × n_p × P(D∩F)

where VAC_P is the value added cost of the chip and P(D∩F) is the probability that the
chip is defective and is failed.


Table 1. Cost Model Definitions

Cost      Definition
CP_P      Total cost to producer for n_p chips
C_P       Cost of product, including costs for design personnel, support, processing wafers, wafer test, mask preparation and packaging
C_T       Total cost of test
C_ATC     Actual cost to test a chip
C_FS      Fault simulation cost
C_TG      Test generation cost
C_t       Time costs of the testing process
C_BCD     Cost of bad chips discarded
VAC_P     Value added cost of chip to producer
C_GCD     Cost of good chips discarded
C_NFBC    Cost of not failing a bad chip
CP_C      Cost of chip to customer
VAC_C     Value added cost of chip to customer
C_SM      Customer cost of integrating IC in the system
C_ST      Customer cost of system test
pm        Profit margin

The value added cost of the chip to the producer, VAC_P, is given by

VAC_P = (C_P + C_T) / n_p

Note that the calculation of C_BCD is recursive because VAC_P is expressed in terms of C_P
and C_T. P(D∩F) is given by

P(D∩F) = 1 - Y^(1-TT)

where Y is the process yield and TT is the test transparency. So, C_BCD can be expressed as

C_BCD = VAC_P × n_p × (1 - Y^(1-TT))    (28)


Based on [32], C_GCD is given by

C_GCD = VAC_P × n_p × P(D̄∩F)

where P(D̄∩F) is the probability that a good chip is failed. C_GCD can be written as

C_GCD = VAC_P × n_p × Y × α    (29)

where α is the probability of a Type I error in the testing process.

Based on [32], C_NFBC is given by

C_NFBC = VAC_P × n_p × P(D∩F̄)

where P(D∩F̄) is the probability that a defective chip is not failed. C_NFBC can be written
as

C_NFBC = VAC_P × n_p × (Y^(1-TT) - Y)    (30)

If the customer buys n_c chips, the chip cost to the customer, CP_C, is given by

CP_C = CP_P(1 + pm) + n_c × P(D∩F̄) × VAC_C

where pm is the profit margin, P(D∩F̄) is the probability that a defective chip has not
been failed by the manufacturer, and VAC_C is the value added cost to the customer. VAC_C
is given by

VAC_C = (CP_P / n_p)(1 + pm) + C_SM + C_ST

where (CP_P / n_p)(1 + pm) is the purchase price of the IC, C_SM is the cost of integrating
the IC into a system and C_ST is the cost of system test. CP_C can be expressed as

CP_C = CP_P(1 + pm) + n_c × VAC_C × (Y^(1-TT) - Y)    (31)

A manufacturer aims to keep the chip cost, CP_P, and the cost to the customer, CP_C, to a
minimum and, at the same time, earn an acceptable profit margin.
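The producer-side relations in Equations (25) through (30) can be collected into a small Python sketch; the numeric inputs and the fixed-point iteration used to resolve the recursive definition of VAC_P are assumptions made for illustration, not results of this research.

def producer_cost(c_product, c_atc, n_chips, process_yield, tt, alpha, iterations=20):
    # Total producer cost CP_P = C_P + C_T (Equation (25)).
    # c_product:     C_P, cost of product for the n_chips chips
    # c_atc:         C_ATC, actual cost to test the chips (Equation (27))
    # process_yield: Y, process yield
    # tt:            TT, test transparency
    # alpha:         probability of a Type I error (a good chip is failed)
    # VAC_P = (C_P + C_T) / n_p depends on C_T, while C_T contains terms that
    # use VAC_P, so the recursion is resolved here by fixed-point iteration.
    c_test = c_atc
    for _ in range(iterations):
        vac_p = (c_product + c_test) / n_chips
        c_bcd = vac_p * n_chips * (1.0 - process_yield ** (1.0 - tt))              # Eq. (28)
        c_gcd = vac_p * n_chips * process_yield * alpha                            # Eq. (29)
        c_nfbc = vac_p * n_chips * (process_yield ** (1.0 - tt) - process_yield)   # Eq. (30)
        c_test = c_atc + c_bcd + c_gcd + c_nfbc                                    # Eq. (26)
    return c_product + c_test

# Example with assumed inputs: 100,000 units of product cost, 5,000 units of
# test cost, 1,000 chips, 60 percent yield, TT = 0.01, alpha = 0.001.
print(producer_cost(100000.0, 5000.0, 1000, 0.6, 0.01, 0.001))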


3.2. Supporting Tools

The process and tools used to evaluate circuit cost quality are shown in Figure 13. The

integrated tool suite for circuit cost quality estimation consists of two parallel processes,

test generation and yield prediction. The test generation tool suite estimates the test

transparency and the number of test vectors. The yield prediction tool suite estimates the

functional yield.

The following procedure generates the test vectors. The circuit fault list is extracted using

CARAFE [14,15]. In addition to the circuit layout (circuit.mag), CARAFE receives as

input a technology file and a fabrication statistics file. The output of CARAFE is the file

(circuit.pro) listing the possible faults ranked according to their likelihood of occurrence.

The netlist (circuit.ext) is obtained using MAGIC’s [23] circuit extractor. CARP [5]

generates the list of bridging faults (circuit.bf) and the switch-level description file

(circuit.sdl). These two files are used by BODEM to generate the test vectors. Test

transparency is estimated using the circuit undetectability profile determined using both

the Bayesian model and the Beta model. This permits comparison of the two models. To

perform statistical testability estimation, a fault simulator is required. However, the fault

simulator in BODEM is not accessible from the command line, so a different fault
simulator, SIM [3], has been used with the statistical models. The output of SIM

(circuit.dp) gives the detection probabilities of each of the faults. The fault detection

probability is used by programs MOD_1 and MOD_2 to estimate circuit undetectability

using the Bayesian and Beta methods, respectively. The “Statistical Testability Estimation”

block shown in Figure 13 requires the use of either program MOD_1 or program MOD_2.

The procedure described above calculates test transparency using the entire fault list. To

take advantage of the statistical models, a random sample of faults can be used for test

generation. The program RAN generates a random sample consisting of ten percent of the
entire fault list. This percentage is chosen arbitrarily, but it reduces the fault list
sufficiently to significantly reduce fault simulation and test generation time. Also, ten
percent of the faults provides a good estimate of circuit testability.


Figure 13. Cost quality estimation tool suite. (Block diagram: the layout circuit.mag is processed by FLAT, MAGIC, and CARAFE on the test generation side, and by cifext2vl, vltopack, and VLASIC on the yield prediction side; SIM and the statistical testability estimation programs feed the circuit cost quality calculation.)


BODEM generates test

vectors using this reduced fault list. These test vectors are then used with the fault

simulator SIM. To use the Beta model with a random sample of faults, the procedure

described in Section 2.5.2 is used to estimate the undetectability profile and the test

transparency.
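The ten percent sampling step can be illustrated with a few lines of Python; the list-based fault representation and the function name are assumptions for illustration, since the sampling in this work is actually performed by the program RAN.

import random

def sample_fault_list(fault_list, fraction=0.10, seed=None):
    # Return a random sample containing the given fraction of the fault list.
    rng = random.Random(seed)
    sample_size = max(1, int(len(fault_list) * fraction))
    return rng.sample(fault_list, sample_size)

# Example: keep ten percent of a hypothetical 2,790-entry bridging fault list.
faults = ["bf_%d" % i for i in range(2790)]
print(len(sample_fault_list(faults, seed=1)))    # prints 279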

To estimate the circuit functional yield, the layout yield simulator VLASIC [29,30], as
described in Section 2.6, is used. The circuits used in this research are hierarchical, but
vltopack, a program in the VLASIC yield estimation tool set, works only with completely

instantiated or flat circuits. The program FLAT flattens hierarchical circuits created in

MAGIC so that they can be used with VLASIC. The VLASIC tools include the programs

cifext2vl and vltopack which convert MAGIC layouts to packed binary format for use

with VLASIC (circuit_flat.pack). VLASIC estimates yield based on the technology

description, defect statistics, wafer map, and simulation inputs. Using yield estimates from

VLASIC and test transparency from Equation (23), the test quality level is estimated using

Equation (16). Using the test quality level and the time to test, CCQ is estimated using

Equation (22).

The new programs written for this research are listed in Table 2. FLAT flattens a

hierarchical circuit layout in MAGIC. It consists of two programs, whose inputs and

outputs are shown in Figure 14. The file circuit.iol is a list of circuit inputs and outputs

which is used to remove multiple labeling that causes errors during extraction in MAGIC.

MOD_1 implements the Bayesian model. The model inputs and outputs are shown in

Figure 15. The program SIM has been modified to provide the inputs required for

MOD_1. The modified output file of SIM, circuit.dpmod1, is shown in Figure 15. Figure


16 shows the inputs required for the Beta model. The program MOD_2 implements the

Beta model.

Using the tool set in Figure 13, the circuit cost quality can be determined. Knowing the IC

yield, test coverage estimate, probability that the tester fails working chips, and the costs

discussed in Section 3.1.4.2, the cost-quality of different layouts can be analytically

estimated. The use of the method and tools is demonstrated for the ISCAS '85 benchmark

circuits in Chapter 4.

Table 2. Programs Written for this Research

PROGRAM    FUNCTION
FLAT       Flattens a hierarchical circuit description in circuit.mag format.
UNLABEL    Removes all labels from the flat circuit except single instances of the primary input/output labels.
MOD_1      Implements the Bayesian model of circuit testability estimation.
MOD_2      Implements the Beta model of circuit testability estimation.
SIM_MOD    Modifies the program SIM [3] to give outputs required for MOD_1.
TV         Creates the test vector file circuit.tv from the output of BODEM.


Figure 14. Flattening a hierarchical MAGIC circuit. (circuit.mag is processed by FLAT and then UNLABEL to produce circuit_flat.mag.)

Figure 15. Circuit undetectability using the Bayesian model. (The modified SIM output circuit.dpmod1 is input to MOD_1, which computes the circuit undetectability.)


Figure 16. Circuit undetectability using the Beta model. (The program MOD_2 computes the circuit undetectability from the fault detection probabilities.)


Chapter 4. Results

This research proposes a method for IC layout design based on the systems approach. An

evaluation method for layouts has been proposed as part of this method. This chapter

presents results from experiments on benchmark circuits using this evaluation metric, the

cost quality model. These experiments were carried out using the circuit cost quality

estimation tool suite shown in Figure 13. The circuits used for the experiments are layouts

of the ISCAS ‘85 benchmark combinational circuits [7] created at the Microelectronics

Center of North Carolina (MCNC). The MCNC layouts are hierarchical and use the

standard cell design approach.

4.1. Undetectability Profiles

This section examines the undetectability profiles of the ISCAS ‘85 benchmark circuits

represented at the switch level. The undetectability profiles of the switch-level circuits are

shown in Figures 17 through 20. The undetectability in these graphs is calculated using the

Bayesian approach. Figure 17 plots the undetectability data calculated when test

generation is performed for the entire fault list, while Figure 18 plots the undetectability

data when test generation is carried out using a random sample of ten percent of the

circuit faults. Ten percent of the faults provides a good estimation of circuit testability, as

is seen by comparing Figures 17 and 18. As a measure of circuit testability, the area under

the curve for each circuit is visually estimated for a number of vectors less than an

arbitrarily chosen value. The smaller the area under the curve, the greater the testability.

The same circuit testability observations are made in both graphs. For the layouts of

circuits c5315, c3540, and c1908, if we consider testability at 20 test vectors, c5315 is
more testable than c3540, which in turn is more testable than c1908. These results indicate
that using a random sample of switch-level faults provides good comparative results while
reducing the computational overhead of full test generation.
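As an assumed numerical analogue of the visual comparison above (not part of the tool suite used in this work), the area under an undetectability profile up to a chosen vector count can be approximated with the trapezoidal rule, a smaller area indicating a more testable layout.

def undetectability_area(profile, max_vectors):
    # Approximate the area under an undetectability profile I(n).
    # profile:     list of (n, I(n)) points sorted by n
    # max_vectors: only points with n <= max_vectors are considered
    points = [(n, i_n) for n, i_n in profile if n <= max_vectors]
    area = 0.0
    for (n0, i0), (n1, i1) in zip(points, points[1:]):
        area += 0.5 * (i0 + i1) * (n1 - n0)    # trapezoidal rule
    return area

# Example with an assumed profile sampled every ten vectors.
profile = [(1, 0.45), (11, 0.20), (21, 0.10), (31, 0.06), (41, 0.04)]
print(undetectability_area(profile, max_vectors=20))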


Figure 17. Undetectability profiles using the entire fault list. (Plot of undetectability I(n) versus number of vectors n, for n from 1 to 101.)

Figure 18. Undetectability profiles using a random sample of the fault list. (Plot of undetectability I(n) versus number of vectors n.)



Figure 19 shows the undetectability profiles of five other MCNC layouts. Except for

circuit c17, the undetectability profiles of the other circuits are similar. Some layouts may

be only marginally more testable than others. Circuit c17 is a very small circuit and

behaves differently from the others, as will also be seen in later sections, because statistical

models assume large fault populations. With a small fault list, statistical observations made

using c17 are inaccurate.

As a means of comparison between gate-level circuits using standard logic observation

testing and switch-level circuits using IDDQ testing, the undetectability profiles of circuits
c2670, c6288 and c7552 are plotted in Figure 20. Seth, et al. [25] plot the same curves for

gate-level circuits. For the switch-level circuits, the undetectability profiles almost overlap.

However, careful comparison shows that c7552 is marginally more testable than c2670.

Circuit c6288 has an undetectability profile that is nearly the same as that of circuit c2670.

These observations are quite different from those for gate-level circuits. In [25], circuit

c6288 was found to be significantly more testable than the other two circuits. Also, c2670

was found to be slightly more testable than c7552. This observation is due to IDDQ testing

for physically realistic faults adopted in this research as opposed to the logic observation

testing method with the stuck-at fault model used in [25]. This observation clearly shows

the importance of considering and testing for low-level fault mechanisms while assessing

testability.

4.2. Yield Analysis

Yield is one of the cost factors considered in the cost quality model. This research
considers functional yield. VLASIC is used to generate layout yield estimates. This section
describes the simulation parameters used with VLASIC, provides yield estimates for the
MCNC layouts, and discusses limitations of VLASIC.


Figure 19. Undetectability profiles of MCNC layouts. (Plots of undetectability I(n) versus number of vectors n for (a) c17, (b) c432, (c) c499, (d) c880, and (e) c1355.)


Figure 20. Undetectability profiles of MCNC layouts using the entire fault list. (Plot of undetectability I(n) versus number of vectors n for c2670, c6288, and c7552.)


VLASIC requires a technology file, a defect statistics file, a wafer map file and simulation

control commands as inputs. The scalable CMOS process (SCMOS) technology file used

is given in Appendix A.2. VLASIC considers eighteen different fault types for SCMOS

technology. They are extra first metal, missing first metal, extra active-MF(first metal)

contact, missing active-MF contact, extra poly-MF contact, missing poly-MF contact,

extra poly, missing poly, extra active, missing active, extra second metal, missing second

metal, extra MF-MS (second metal) contact, missing MF-MS contact, poly-MF oxide

pinhole, MF-MS oxide pinhole, gate oxide pinhole, and junction leakage. A sample defect

statistics file used in the simulation is shown in Appendix B.2. The parameters used in the

defect statistics file must be specified for all the defect types and must be in the same order


as the defect types in the process technology file. The parameters for each defect type, as

explained in [30] are listed below.

• Defect density (d): the mean number of defects of that type per square centimeter.
• Lot alpha (l): the between-lot defect clustering coefficient.
• Wafer alpha (w): the between-wafer defect clustering coefficient.
• Zone radius (D): the radius of the wafer inner zone in centimeters.
• Zone probability (S): the fraction of defects that land in the wafer inner zone.
• Diameter peak (p): the diameter at which the defect density is at its maximum, in centimicrons (overridden by M).
• Diameter file (M): the filename for the binary file describing the discrete defect diameter distributions. The discrete diameter distribution is generated using the program makedist that is part of the VLASIC tool suite. The program uses as input a file that lists the defect diameters in centimicrons and their corresponding weights. An example input file to makedist is given in Appendix B.3.
• Diameter bias (b): the bias for minimum width and spacing rules in centimicrons.
• Minimum diameter (r): the minimum defect diameter that causes a circuit fault, in centimicrons.

The values used for these parameters are indicated in the defect statistics file in Appendix

B.2. It should be noted that many of the values used are assumed since actual process
data is not available. The values are believed to be reasonable assumptions for certain

manufacturing environments.

The simulation was carried out with a 1,000-chip sample with one wafer manufactured per

lot. These values were chosen to reduce simulation time. The yield estimates for the

MCNC layouts are presented in Table 3. No values are available for circuits c5315, c6288

and c7552 because the simulations could not run to completion on the workstation used in

this work due to insufficient memory swap space.


Table 3. Yield Estimates for MCNC Layouts

Circuit    Yield Estimate (%)
c17        99.8
c432       78.3
c499       59.9
c880       61.6
c1355      47.2
c1908      53.4
c2670      20.5
c3540      16.3

The utility of VLASIC is limited since it has considerable computational and memory

overhead. To use VLASIC with the MAGIC circuit file format, a packed wirelist file has

to be created. The program cifext2vl, which converts a MAGIC file to a VL format layout
for use by VLASIC, uses a quadratic algorithm that runs slowly on cells with more than a
few hundred transistors [30]. Also, large simulations take many hours. For example,
cifext2vl took 15 hours and 9 minutes to create the VL format file on a DECstation
5000/125 workstation for circuit c3540, and the VLASIC simulation for the same circuit
took 1 hour and 16 minutes. VLASIC uses about 1,750 bytes of storage per transistor, so
for large VLSI circuits the memory overhead is considerable. Because of these limitations,
yield estimates for the large layouts cannot be obtained in a reasonable time using

VLASIC.

4.3. Layout Evaluation Using the Cost Quality Model

This section presents results from experiments that evaluate and compare the MCNC

layouts and modified versions of the MCNC layouts. The experiments demonstrate the


effectiveness of the cost quality model as an evaluation measure in the proposed iterative

design process. These experiments also investigate whether there is any advantage in using

either the Bayesian or the Beta models for calculating the CCQ value, and the effect of

using a random sample of circuit faults for test generation on the evaluation results.

The time required to apply a fixed number of IDDQ test vectors is a design-independent
parameter. The results for CCQ are presented in terms of the parameter t, defined to be the
time required to apply 100 test vectors for IDDQ testing.
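Combining this convention with Equation (22), a short Python sketch (an illustration, not one of the programs written for this research) expresses CCQ relative to t:

def ccq_relative_to_t(process_yield, test_transparency, n_vectors):
    # With t_T = (n_vectors / 100) * t, Equation (22) gives
    # CCQ = QL / t_T = (Y ** TT) * 100 / (n_vectors * t),
    # so the value returned here is CCQ multiplied by t.
    quality_level = process_yield ** test_transparency
    return quality_level * 100.0 / n_vectors

# Example with assumed values: 78.3 percent yield, TT = 0.016, 38 test vectors.
print(ccq_relative_to_t(0.783, 0.016, 38))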

Table 4 indicates CCQ values for the original MCNC layouts using the entire fault list.

Table 5 presents CCQ values using a random sample of the circuit faults. From a

comparison of the data in each table, one can conclude that the Bayesian and the Beta

models provide almost the same CCQ figure for a particular layout and fault set. It should

be noted that the CCQ values in Tables 4 and 5 are figures of merit for the particular

layout. They cannot be used to compare two different circuits. If one is interested in

comparing two different circuits, then the CCQ calculation should use the circuit

undetectability for both circuits given the same number of test vectors.

Table 6 presents a comparison between the fault coverage estimates for the original

MCNC layouts using the Bayesian model, the Beta model, standard fault coverage and the

weighted fault coverage. It is seen that for full ATPG with the entire fault list, the

statistical methods overestimate the fault coverage when standard fault coverage is high,

i.e. almost 100 percent. Thus, these statistical methods are not accurate predictors of fault

coverage for the maximum number of test vectors. They are, at best, indicators of circuit

testability. This limitation of the statistical models does not necessarily make them

unsuitable for the methodology proposed in this research since the evaluation measure

requires an indication of circuit testability rather than an exact testability value.
Statistical models provide nearly exact testability estimation when testability is calculated
using a random sample of the fault list and an arbitrary number of test vectors that is less
than the maximum number generated.


[Table 4. CCQ for MCNC Layouts Using the Entire Fault List — for each circuit: total bridging faults, maximum number of test vectors, undetectability at the maximum number of vectors (Bayesian and Beta), yield (%), and CCQ with actual CCQ = CCQ/t (Bayesian and Beta).]


[Table 5. CCQ for MCNC Layouts Using a Random Sample of the Fault List — for each circuit: total faults, number of test vectors, undetectability (Bayesian and Beta), yield (%), and CCQ with actual CCQ = CCQ/t (Bayesian and Beta).]


[Table 6. Comparison of Circuit Fault Coverage Using Statistical Estimation Models — for each circuit: number of test vectors, estimated fault coverage from the Bayesian and Beta models, standard fault coverage, and weighted fault coverage.]


The failure of the statistical models for circuit c17
should be noted. Inflated fault coverage results are seen for c17 using both the Bayesian

and Beta approaches due to the small circuit size, as noted previously.

In an iterative, concurrent engineering-based approach to layout design, a layout is

created, evaluated and, based on the evaluation, a decision to retain the design or redesign

is made. Experiments were performed to test the utility of the cost quality model in this

design methodology. The circuits used were c17, a small circuit with 6 gates, c432, with

120 gates and c3540, with 1,669 gates. Two different layouts of each of the three circuits

were created by modifying the MCNC layout. Due to the lack of CAD tools for place and

route, only minor modifications were made to the MCNC layout for each of the two

versions created. The modifications included increasing fault likelihood in some standard

cells and decreasing fault likelihood in other standard cells. Fault probability was increased

by bringing two wires closer to each other and by increasing wire thickness. Fault

probability was reduced by moving wires further from each other and by reducing wire

thickness. The faults were extracted by CARAFE using three different fabrication

processes, fab_1, fab_2, and fab_3. The maximum defect size considered in each of these

fabrication processes is summarized in Table 7. Process fab_3 is the “best” process, while

fab_2 is the “worst.” The fabrication statistics file for a sample process (fab_1) is shown in

Appendix B.

Table 7. Maximum Defect Size in the Fabrication Process

Fabrication Process    Maximum Defect Size (centimicrons)
fab_1                  400
fab_2                  800
fab_3                  100


Tables 8 and 9 present results for circuit c17 using the complete fault list and a random
sample of faults, respectively. For c17, it is seen that the model fails completely when
using either the entire fault list or a random sample of the faults. When there is a change in
the layout and/or fabrication process, there is no change in the CCQ value. This failure
occurs for two reasons.

1. Circuit c17 is extremely testable, with no redundant or untestable faults. Modifying the
layout to the limited extent done in these experiments does not change layout testability.

2. Circuit c17 is a very small circuit. As mentioned earlier, statistical models assume large
fault populations and, therefore, are not accurate in calculating the undetectability
profile for small circuits like c17.

Tables 10 and 11 present CCQ values for circuit c432 using the entire fault list and using a
random sample of faults, respectively. For the fab_1 process statistics, it is seen that layout
Version 1 has a better (higher) CCQ value than layout Version 2. The same observation is
made when using the entire fault list and when using only a random sample of the faults. A
similar observation is made for the fab_3 process statistics. However, for the fab_2 process
statistics, Version 2 is better when using the complete fault list and Version 1 is better
when using a random sample of faults. This discrepancy occurs due to the approximation
made in Equation (23), where it is assumed that

TT = 0    if (I(n) - n/Y) ≤ 0

Due to this approximation, the CCQ calculation for layout Version 2 with the fab_2 process
statistics is affected only by the number of test vectors and not by the quality level. This
error can be corrected by performing the undetectability calculations at a number of
vectors that is less than the maximum number generated by full test generation. In that
case, test transparency is not zero and CCQ depends on the quality level. In this
experiment, the result obtained using the entire fault list is correct, i.e., with the fab_2
process statistics, layout Version 2 is better than layout Version 1.


[Table 8. CCQ for c17 with Different Layouts and Fabrication Statistics Using the Entire Fault List — for each layout version and fabrication process: number of bridging faults, number of test vectors, undetectability, yield (%), and CCQ (Bayesian and Beta).]

[Table 9. CCQ for c17 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List — same quantities as Table 8, computed from the random fault sample.]


[Table 10. CCQ for c432 with Different Layouts and Fabrication Statistics Using the Entire Fault List — for each layout version and fabrication process: number of bridging faults, number of test vectors, undetectability, yield (%), and CCQ (Bayesian and Beta).]

[Table 11. CCQ for c432 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List — same quantities as Table 10, computed from the random fault sample.]


Tables 12 and 13 present CCQ values for circuit c3540. In these experiments it is seen that

for the fab_1 process statistics, layout Version 1 is better, i.e. it has a higher CCQ value

when using either the complete fault list or a random sample of faults. It is also seen that

layout Version 2 is better when using the fab_3 process statistics. This observation holds

when using either the full or random fault list.

IC life-cycle cost considerations also need to be taken into account before a decision to

manufacture is made. Section 3.1.4.2 presented cost calculations to be considered. Due to

the lack of actual cost estimates, this research does not calculate life-cycle costs to the

manufacturer or the customer. However, if such data were available, the costs could be

calculated directly using the equations in Section 3.1.4.2 and the results given above.

4.4. Summary

From the experiments, it is seen that the CCQ values obtained using the Bayesian approach
and the Beta model are almost the same. There is no significant advantage to be gained

from adopting either particular statistical model for evaluating CCQ. It is also seen that

the evaluation decision using test generation with a random fault sample is the same as it is

when the entire fault list is used, except for circuit c17. There is a cost advantage in using

statistical models with a small random sample of the fault list rather than full test

generation for all the faults. The results also show that due to the approximations made in

the adopted approach, inaccuracies can occur in the comparison between alternate layouts

when /(n), the undetectability, is estimated at the maximum number of test vectors, N,

generated. CCQ estimation should be carried out with n < N, Le. at an arbitrarily selected

number of test vectors less than the maximum number generated by the test generator.

Most importantly, it was seen that the proposed cost quality model is an effective
evaluation measure for medium and large circuits. Layout modifications lead to different
CCQ values.


CARAFE fails for the fab_2 process statistics for circuit c3540, so no results are presented for that process.

[Table 12. CCQ for c3540 with Different Layouts and Fabrication Statistics Using the Entire Fault List — for each layout version and fabrication process: number of bridging faults, number of test vectors, undetectability, yield (%), and CCQ (Bayesian and Beta).]

[Table 13. CCQ for c3540 with Different Layouts and Fabrication Statistics Using a Random Sample of the Fault List — same quantities as Table 12, computed from the random fault sample.]


Using the CCQ metric, a redesign decision can be made in the iterative IC

design process proposed in this research.


Chapter 5. Conclusion

5.1. Summary

This research applied the systems approach to product development to integrated circuit

design. The systems approach considers functional and cost effectiveness in every stage of

the IC life cycle. This research focused on circuit layout in the detailed design stage. The

proposed design method iterates on layout changes to create alternative layouts. It

attempts to improve the layout in each iteration based on physical design for testability

principles. The new alternative is evaluated and compared to the previous baseline layout

based on controllable decision variables, design-dependent parameters and

design-independent parameters. A mathematical model is required to perform this

evaluation. This research developed an evaluation measure, circuit cost quality (CCQ).

The controllable decision variable in the model is the number of test vectors required to

achieve the target fault coverage. The design-dependent variables include the number of

faults and the yield. The design-independent variables are the fabrication process and fixed

cost factors. In developing this model, this research extends known techniques to estimate

test quality and testability for gate-level circuits to switch-level circuits. To provide the

best possible product quality, the method uses inductive fault analysis techniques to

produce a list of physically realistic faults at the switch-level of circuit abstraction, test

generation tools for IDDQ test of the set of switch-level bridging faults, statistical models

for estimating circuit testability to reduce overhead, a layout yield estimation tool and test

quality and cost evaluation metrics.


Experiments were performed using layouts of the ISCAS '85 benchmark circuits to

evaluate the utility of the cost quality model. Important findings are listed below.

• The cost quality model is an effective measure that can be used to obtain a figure of merit for layouts of medium and large circuits.

• Gate-level testability differs from switch-level testability using IDDQ test, as is seen by comparing gate-level circuit testability and switch-level circuit testability for the same circuits. Switch-level testability and IDDQ test must be taken into consideration to maximize the quality of shipped ICs.

• There is no significant advantage to be gained from adopting either the Bayesian or Beta statistical models for use in the CCQ metric.

• Use of the statistical models reduces design time without adversely affecting the utility of the evaluation. The CCQ values obtained using the statistical models with a random sample of the fault list provide the same decisions based on layout evaluation as those obtained using the entire fault list.

5.2. Directions for Future Research

There are several topics for further research that are a direct extension of this work. A

statistical approach that shows greater fidelity in coverage prediction when using a switch-
level bridging fault model with IDDQ testing could be investigated. The development of

CAD tools to support the concurrent engineering-based IC design process proposed in

this work is another area for future research. This would include the development of a tool

set to automatically modify layout based on P-DFT principles and the evaluation

performed using the cost quality model. Another suggestion for research is the extension

of the CCQ metric to incorporate more design-dependent parameters that can be modified

to improve the design.


Bibliography

[1] M. Abramovici, M.A. Breuer and A.D. Friedman, Digital Systems Testing and Testable Design, Computer Science Press, New York, NY, 1990.

[2] J.M. Acken, "Testing for Bridging Faults (Shorts) in CMOS Circuits," Design Automation Conference, 1983, pp. 717-718.

[3] S. Almajdoub, private communications, May 1994.

[4] B.S. Blanchard and W.J. Fabrycky, Systems Engineering and Analysis, Second Edition, Prentice Hall, Englewood Cliffs, NJ, 1990.

[5] S.W. Bollinger, "Hierarchical Test Generation for CMOS Circuits," Ph.D. Dissertation, Virginia Polytechnic Institute and State University, 1992.

[6] S.W. Bollinger and S.F. Midkiff, "Test Generation for IDDQ Testing of Bridging Faults in CMOS Circuits," IEEE Transactions on Computer-Aided Design, to appear.

[7] F. Brglez and H. Fujiwara, "A Neutral Netlist of 10 Combinational Benchmark Circuits and a Target Translator in Fortran," International Symposium on Circuits and Systems, 1985, pp. 695-698.

[8] I.D. Dear, C. Dislis, A.P. Ambler and J. Dick, "Economic Effects in Design and Test," IEEE Design and Test of Computers, vol. 8, no. 4, pp. 64-77, December 1991.

[9] H. Farhat and S.G. From, "A Beta Model for Estimating the Testability and Coverage Distributions of a VLSI Circuit," IEEE Transactions on Computer-Aided Design, vol. 12, no. 4, pp. 550-554, April 1993.


[10] F.J. Ferguson and J. Shen, "A CMOS Fault Extractor for Inductive Fault Analysis," IEEE Transactions on Computer-Aided Design, vol. 7, no. 11, pp. 1181-1194, November 1988.

[11] F.J. Ferguson and T. Larrabee, "Test Pattern Generation for Realistic Bridge Faults in CMOS ICs," International Test Conference, 1991, pp. 492-499.

[12] F.J. Ferguson, "Physical Design for Testability for Bridges in CMOS Circuits," VLSI Test Symposium, 1993, pp. 290-295.

[13] A.V. Ferris-Prabhu, Introduction to Semiconductor Device Yield Modeling, Artech House, Inc., Norwood, MA, 1992.

[14] A. Jee, "Carafe User's Manual Alpha.3 Release," University of California at Santa Cruz, June 1992.

[15] A. Jee and F.J. Ferguson, "Carafe: An Inductive Fault Analysis Tool for CMOS VLSI Circuits," VLSI Test Symposium, 1993, pp. 92-98.

[16] C. Kooperburg, "Circuit Layout and Yield," IEEE Journal of Solid State Circuits, vol. 23, no. 4, pp. 887-892, August 1988.

[17] W. Maly, "Yield Models - Comparative Study," in Defect and Fault Tolerance in VLSI Systems, vol. 2, C.H. Stapper, et al., eds., Plenum Press, New York, NY, 1990.

[18] E.J. McCluskey and F. Buelow, "IC Quality and Test Transparency," International Test Conference, 1988, pp. 295-301.

[19] G. Moore, "What Level of LSI is Best for You?," Electronics, vol. 43, no. 4, pp. 126-130, February 1970.

[20] V.N. Nickel, "VLSI - The Inadequacy of the Stuck-At Fault Model," IEEE Test Conference, 1980, pp. 378-381.

[21] P. Nigh and W. Maly, "Test Generation for Current Testing," European Test Conference, 1989, pp. 194-200.

[22] T. Okabe, et al., "Analysis on Yield of Integrated Circuits and a New Expression for the Yield," Electrical Engineering in Japan, vol. 92, pp. 135-141, December 1972.


[23] J.K. Ousterhout, et al., "The Magic VLSI Layout System," IEEE Design and Test of Computers, vol. 2, no. 1, pp. 19-30, January 1985.

[24] R.B. Seeds, "Yield and Cost Analysis of Bipolar LSI," IEEE International Electron Devices Meeting, p. 12, October 1967.

[25] S.C. Seth, V.D. Agrawal, and H. Farhat, "A Statistical Theory for Digital Circuit Testability," IEEE Transactions on Computers, vol. 39, no. 4, pp. 582-586, April 1990.

[26] J.P. Shen, W. Maly, and F.J. Ferguson, "Inductive Fault Analysis of MOS Integrated Circuits," IEEE Design and Test of Computers, vol. 2, no. 6, pp. 13-26, December 1985.

[27] P. Varma, A.P. Ambler and K. Baker, "An Analysis of the Economics of Self-Test," International Test Conference, 1984, pp. 20-30.

[28] R.L. Wadsack, "Fault Modeling and Logic Simulation of CMOS and MOS Integrated Circuits," Bell System Technical Journal, vol. 57, no. 3, pp. 1449-1473, May-June 1978.

[29] D.M.H. Walker, Yield Simulation for Integrated Circuits, Kluwer Academic Publishers, Boston, MA, 1987.

[30] D.M.H. Walker, "VLASIC System User Manual Release 1.3," SRC-CMU Research Center for Computer-Aided Design, Department of Electrical and Computer Engineering, Carnegie Mellon University, August 1990.

[31] N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Second Edition, Addison-Wesley Publishing Company, Reading, MA, 1993.

[32] R.H. Williams and C.F. Hawkins, "Errors in Testing," International Test Conference, 1990, pp. 1018-1027.

[33] T.W. Williams and N.C. Brown, "Defect Level as a Function of Fault Coverage," IEEE Transactions on Computers, vol. C-30, no. 12, pp. 987-988, December 1981.


Appendix A. CARAFE and VLASIC Technology Files

The SCMOS technology files used with CARAFE and VLASIC are presented here.

A.1. CARAFE Technology File

# This is the tech file for the mcnc cells

tech

scmos

end

planes trans

metall

metal2

oxide

end

types

trans polysilicon trans ndiffusion

trans pdiffusion trans ntransistor

trans ptransistor

trans nwell

trans pwell metall metall

metal2 metal2

oxide checkpaint end

contact

polycontact polysilicon metall ndcontact metall ndiffusion

pdcontact metall pdiffusion m2contact metall metal2

psubstratepcontact metall pwell nsubstratencontact metall nwell

end

connect

polysilicon ntransistor polysilicon ptransistor

end

compose


trans ntransistor ndiffusion, polysilicon trans ptransistor polysilicon, pdiffusion

end

# since this is for magic file only, there are no calma numbers

calma

end

extract

fet on ntransistor GND! _ ndiffusion

fet p_ ptransistor GND! pdiffusion end

route

trans polysilicon,ntransistor,ptransistor,pdiffusion,ndiffusion metall metall

metal2 metal2

end

bridge polysilicon metal] metall metal2

ndiffusion metall

pdiffusion metal1 end

fault

bridge n 200 4000 1

break p 200 4000 1 end

color polysilicon Red pdiffusion Gold ndiffusion Green

pwell LimeGreen nwell Orchid

ntransistor FireBrick

ptransistor Khaki

metall Blue

metal2 Violet

end

A.2. VLASIC Technology File

* Technology file syntax is: * comment - line starting with * * input - line starting with keyword, lines of stuff, then end

* Technology file for double-metal P, N, or twin-well scaleable CMOS. * Contacts on top of channel are illegal

* Process is self-aligned so active intersect poly form channel regions. * Extra contacts are assumed not to occur.


* Extra/missing select implants, wells are not included. * Cifext2vl and ENTICE make NX = CAA and CPG, and makes CAA = CAA - CPG, * i.e. diffusion regions,

inputlayers CWN CWP CMS CMF CPG CAA CVA CCA CCP CSN CSP COG XP NX end * what Magic plane 0-N each conducting layer lies in * planes 0-5 are used by Magic V6, 0-2 by Magic V4 * <layername> <plane number> planes

CAA 6 CPG 6 CMF 7 CMS 8 * CWN 9 * CWP 9 * COG 10 end * layer combinations separated by one oxide layer * the bottom and top layers must be previously defined * <overlapname> <botlayer> <toplayer> overlap

* this only inserts values into gen, gennet arrays, which are overwritten by * generatedlayers commands AAPG CAA CPG PGMF CPG CMF MFMS CMF CMS end* layers that are not input layers are generated layers * the two source layers must be previously defined * <srel> <src2> <result> <deletel> <delete2> <proc> <merge> <warn> <net1> <net2> generatedlayers * really a NOOP, but included since some circuit extractors do not provide * transistor channels on the NX layer

* overwrites AAPG overlap definition AAPG AAPG AAPG False False NULL False False L2N1 L1N1 end* complete and incomplete via layer stackings * <vianame> <bottomlayer> <vialayer> <toplayer> <bvincomplete> <vtincomplete> via

AACAMF CAA CCA CMF NULL NULL PGCPMF CPG CCP CMF NULL NULL MFVAMS CMF CVA CMS NULL NULL end* All layer combinations that can contain a transistor source/drain


* terminal net number. * Used by FollowNet in fault.c when determining transistors hooked to a net. * <layername> sourcedrain CAA end * All layer combinations that can contain a transistor gate terminal * net number. Used by FollowNet in fault.c when determining transistors * hooked to a net. * should infer DP, PNB is special case. * <layername> gate

CPG AAPG end * what via layers electrically connect to what other layers * <layerl> <layer2> * Jayer2 is connected to layer1, this is not necessarily symmetric

viaconnect

end * what conducting layers electrically connect to what other layers * <layerl> <layer2> * layer2 is connected to layer1, this is not necessarily symmetric

layerconnect end * defect definitions * <defectname> <extra/missing/pinhole type> <layer it occurs on> <printname> defects POSMF + CMF Extra First Metal NEGME - NULL Missing First Metal POSCA + CCA Extra Active-MF Contact NEGCA - NULL Missing Active-MF Contact POSCP + CCP Extra Poly-MF Contact NEGCP - NULL Missing Poly-MF Contact POSPG + CPG Extra Poly NEGPG - NULL Missing Poly POSAA + CAA Extra Active NEGAA - NULL Missing Active POSMS + CMS Extra Second Metal NEGMS - NULL Missing Second Metal POSVA + CVA Extra MF-MS Contact

NEGVA - NULL Missing MF-MS Contact PIN1 p NULL Poly-MF Oxide Pinhole

PIN2 p NULL MF-MS Oxide Pinhole PING p NULL Gate Oxide Pinhole PINJ p NULL Junction Leakage end* layer combinations that interact with the defect for each fault type * could infer this using layer stackings

* <defectname> <faultname> <layername> * all layers must be previously defined

* temporarily split OPEN into SPAN and OPENVIA, and add NEWVIA with SHORT * NEGCA, NEGCP, NEGVA assumed not to occur.

* Some fault analysis routines may waste time by merging the active and * channel regions, assuming active == CAA. interacting POSMF SHORT CMF


NEGMF SPAN CMF NEGMF OPENVIA AACAMF NEGMF OPENVIA PGCPMF NEGCA OPENVIA AACAMF NEGCP OPENVIA PGCPMF * unused - SD is wired in? POSPG NEWGD CAA POSPG SPAN CAA POSPG OPENVIA AACAMF POSPG SHORT CPG POSPG SHORT AACAMF * unused - SD is wired in? NEGPG SHORTD CAA NEGPG SPAN CPG

NEGPG OPENVIA PGCPMF * unused - POLYSILICON is wired in? POSAA NEWSDD CPG POSAA SHORT CAA NEGAA OPEND CPG NEGAA SPAN CAA NEGAA OPENVIA AACAMF POSMS SHORT CMS NEGMS SPAN CMS NEGMS OPENVIA MFVAMS NEGVA OPENVIA MFVAMS PIN1 NEWVIA PGMF PIN2 NEWVIA MFMS PING NEWVIA AAPG PINJ NEWVIA CAA end* layer that masks a defect for a fault type * should be determined from process flow description * Since POSAA is on the CAA layer and AAPG is a transistor channel * then CPG splits POSAA. * <defectname> <faultname> <layername> mask POSAA SHORT CPG

end * layer stackings with special meanings for transistors * <SD/GATE/POLYSILICON> <layername> * all layers must be defined * Cifext2vl assumes that SD and POLYSILICON are input layers. ReadTechnology * enforces this. speciallayers SD CAA GATE AAPG POLYSILICON CPG end * random integer values

* <name> <value> ints MAXDEFDIAM 1800 end


Appendix B. Fabrication and Defect Statistics Files

This section presents the fabrication and defect statistics files used with CARAFE and

VLASIC for the three fabrication processes. The maximum defect size for each process,

fab_1, fab_2, and fab_3, is shown in Table 7. The files for each of the processes are similar

to the ones shown here for fabrication process 1 (fab_1). This section also presents an

example discrete distribution file used by the program makedist.

B.1. CARAFE Fabrication Statistics File (fab_1)

# fab file for mcnc cells

fab

scmos end

types

polysilicon ndiffusion

pdiffusion ntransistor

ptransistor

nwell

pwell

metall metal2

polycontact ndcontact pdcontact m2contact

end

radius

400

end

bridge 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0


1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0

end

break

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

1.0

end

B.2. VLASIC Defect Statistics File (fab_1)

-d 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 1050.5 0.50.5 0.50.5 0.5 0.5 0.50.50.50.5 0.50.5 0.50.5 0.5 0.5 -w 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 -p 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 -b 50 50 0 50 0 50 50 50 50 50 50 50 050 0 0 0 0 -r 300 300 300 300 300 300 150 300 150 300 300 300 300 300 300 300 300 150 D555 55 5555555 5 55 55 5 5 Sti 2121212212222212232133122 -M df_n1i_b df_n1_b NULL df_ni_b NULL df_n1_b df_n1_b df_ni_b df_n1_b df_n1_b df_n1_b df_n1_b NULL df_n1_b NULL NULL NULL NULL

B.3. Discrete Diameter Distribution

This file is an input to the program makedist to generate the binary discrete diameter

distribution file (df_n1 in fab_1).

9 0400 1.0000 0490 0.3000 0540 0.2000 0590 0.1500 0640 0.1200 0840 0.1000 1040 0.0800 1440 0.0100 1840 0.0000


Vita

Sandeep Deshpande was born in Bombay, India. He attended Bombay Scottish School,

from where he graduated in March 1986. From June 1986 to June 1988 he attended the

D.G. Ruparel Junior College, Bombay. In the Higher Secondary Certificate examination,

1988, he was placed first in mathematics in the state of Maharashtra. From July 1988 to

July 1992 he studied electronics engineering at the Victoria Jubilee Technical Institute

(VJTI), University of Bombay. He graduated from VJTI in July 1992 with a bachelor's

degree in electronics engineering with a first class (honors). From August 1992 to

September 1994 he attended the Virginia Polytechnic Institute and State University from

where he obtained a master's degree in electrical engineering in October 1994. Sandeep is a

member of Phi Kappa Phi honor society. His other interests include swimming, writing

poetry and reading.
