testing

Post on 18-Jan-2016


UNIT V

Testing of analog and digital circuits


Introduction

• PCB assemblies of commercial off-the-shelf items such as operational amplifiers, etc.

• Uncommitted analogue arrays dedicated to the required system specification by the final custom metallization mask.

• Full custom IC designs, invariably designed by a vendor or specialist design team.

Controllability and Observability

OEM requirements

• Circuit classes: analog only, special circuits (ADC, DAC, filters), and mixed mode.

• Implementation routes: PCB assembly of standard off-the-shelf parts, analog array, or full custom.

• Test flow: specialist design house testing, then OEM final test.

Linear Analog System Design-advantages

1. More accuracy, because a DSP signal processor has more accuracy and avoids various errors and information loss due to unwanted analog components or to analog components that are operating outside of their specifications.

2. Easier design and testing, because the remaining analog circuit portion is linear. However, incoming analog signals need linear filtering and A/D conversion, so there is a limit to how much analog circuitry can be eliminated. Also, there are significant tradeoffs between analog and digital implementations. An analog implementation of a filter can use significantly less power than a DSP processor, and it may be much faster. This could be critical for portable, wireless devices.


Analog vs. digital testing

1. Size is not a limitation. Analog circuits have relatively few devices, at most 50 to 100, unlike digital circuits, which have reached 50 million transistors. The number of analog inputs and outputs is small.

2. Modelling is far more difficult than in digital circuits:

• There is no widely-accepted analog fault model like the digital stuck-at and path-delay fault models.

• There is an infinite analog signal range, and a range of good signal values.

• Acceptable signal tolerances depend on process variations and measurement inaccuracies.

• Modelling accuracy during analog fault simulation is crucial, unlike during digital fault simulation.

• Analog circuits include noise, which must be modelled and also tested for.

• Measurement error occurs, due to the load on the ATE pin, impedance of the pin, random noise, etc.

• Capacitive coupling between digital and analog circuit substrates in a mixed-signal circuit (MSC) is another source of noise.

Contd..

• Absolute analog component tolerances can vary by 20%, but relative component matching can be as good as 0.1%. Circuit functionality is designed to depend on component ratios.

• A multiple fault model is mandatory, but faults in multiple components can cancel each other's effect, so not every multiple fault is a real fault.

• The multiple fault set is too large to enumerate.

• There is no unique direction of information flow in an analog circuit, whereas there usually is in digital CMOS circuits, with the exception of CMOS C-switch busses. The C-switch realizes a simple connect/disconnect switch between two sub-circuits.

3. Decomposability: Sub-components cannot be tested individually in an analog IC in the way that they can be in a digital IC.

4. Test busses are much harder to realize in analog than in digital circuits:

• Transporting an analog signal to an output pin may alter the analog signal and the circuit functionality.

• Reconfiguring an analog circuit during test is often unacceptable, unlike the case of digital circuit testing. The reconfiguration hardware can unacceptably change the analog circuit transfer function, even when it is turned off.

Contd..

5. Testing Methods:

• Both analog and digital circuits have structural ATPG methods, but because of a lack of well-accepted analog fault models and the lack of a mapping between structural analog faults and analog specifications, structural analog ATPG is not widely used.

• Specification-based (functional) test methods exist for both analog and digital circuits. However, functional testing is rarely used for digital circuits, because the number of tests is intractable. Conversely, in analog testing, specification-based tests are most often used, because they are tractable and need no fault model.

• Digital circuits can be tested separately for logic functionality (stuck faults) and timing performance (path-delay faults). However, these two types of tests cannot be separated in analog circuits, and are combined.

Fault models

• Deviation faults - parameter value changes from nominal.

• Catastrophic faults - wide deviation of a parameter value from its nominal value.

• Dependent faults - a fault occurs due to another parameter-variation fault.

• Environmental faults - caused by the operating environment.

• Intermittent faults - usually catastrophic when they occur, but only temporary.

Actual faults affecting performance

• MOSFET short circuits

• MOSFET floating gates

• Open-circuit and short-circuit capacitors and resistors

• BJT short circuits between base-emitter, emitter-collector, and to-substrate faults

• Leaky junctions

• Pin holes and other fabrication faults

• Mask misalignment, wafer fabrication specification deviations

Analog testing

• Fault modeling techniques are inadequate to detect faults.

• Simulation is required for each selected fault.

• Higher-level commercial simulators (HELIX) do not yield finer details.

• SPICE is time consuming for large circuits.

• Functional tests detect any unacceptable deviation.

• Stuck-at modeling + ATPG = solution.

Parametric Tests and Instrumentation

Fault simulation flow:

1. Simulate the healthy circuit to give the fault-free transfer function.

2. Prepare the chosen fault / out-of-tolerance test.

3. With an appropriate input signal, simulate the circuit to give the transfer function with this fault.

4. If all faults are covered, stop; otherwise repeat from step 2.

Range of analogue only circuits

• Amplifiers

• Regulators

• Oscillators (VCO)

• Comparators

• PLL

• Analog multipliers

• Analog filters

Fault-Model Based Structural Analog Testing

• Analog fault models

• Analog fault simulation: DC fault simulation, AC fault simulation

• Analog automatic test-pattern generation: using sensitivities, using signal flow graphs

Types of Structural Faults

• Catastrophic (hard):

 – Component is completely open or completely shorted

 – Easy to test for

• Parametric (soft):

 – Analog R, C, L, Kn, or Kp (a transistor K parameter) is outside of its tolerance box

 – Very hard to test for

Levels of Abstraction

• Structural Level

 – Structural View: transistor schematic

 – Behavioral View: system of non-linear partial differential equations for the netlist

• Functional Level

 – Structural View: signal flow graph

 – Behavioral View: analog network transfer function

Analog Test Types

• Specification Tests

 – Design characterization: does the design meet specifications?

 – Diagnostic: find the cause of failures

 – Production tests: test large numbers of linear/mixed-signal circuits

Present-Day Analog Testing Methods

• Accuracy

• Speed

• Ease of Operation

• Modelling Convenience

• More Measurement Information

• Size and Power

Special Signal Processing Test Strategies

• Disadvantages of using analog instrumentation include:

 – Difficulty of setting input parameters to exact or repeatable values.

 – Inaccuracy in measuring and/or reading output parameters.

 – Difficulty of synchronizing two or more test inputs, which may be necessary for some analogue tests.

 – Slow response of measuring instruments which rely upon mechanical movements to give their output.

DSP Test Data

• Analysis of phase and magnitude by FFT, obtained from a single-pass instrument.

• BIST strategies can be used.

Pseudorandom DC test data

• Maximal-length (M-) sequences can be generated.

• Expensive.

Functional DSP-Based Testing

DSP Based Testing

• The tester contains a memory for storing readings, and a DSP processor controlled by software routines.

• The test engineer creates a series of vectors as the test waveform. The waveform synthesizer feeds the vectors to a DAC, usually in a continuous loop.

• The DAC output is de-glitched (hazards are removed) and passed through a reconstruction filter to obtain continuous, band-limited waveforms.

• The filter output is applied to the DUT, which generates response output analog signals.

• The waveform digitizer digitizes the analog response coming from the DUT using a high-speed A/D converter, and stores the samples in a RAM.

DSP Based Testing

• When the DUT is a mixed-signal circuit, digital device outputs are collected digitally in the receive memory and digital test waveforms are sent to the DUT from send memory.

• The DSP-based ATE has a burst mode vector transmission capability, in which a large number of vectors can be transmitted, along with a starting address and a vector length. This is also called vector bus architecture, and typically it takes 1 ms to transfer 1000 16-bit integers.

• For comparison, it takes about 5 ms to connect or disconnect a test circuit by switching relays.


DSP Based Testing

• The DSP-based ATE is nearly always more accurate than analog ATE. In particular, it is more accurate for non-linear analog circuit measurements, because vector processing increases the accuracy of a set of vectors compared to the accuracy of an individual sample.

• Because the DSP processor multiplies and divides the various samples, a high-precision DSP processor is essential.

Features of DSP-Based Testers

• The DSP-based ATE is nearly always more accurate than analog ATE.

• Even more mathematical precision is needed because many DSP algorithms produce cumulative error.

• A rule of thumb is that the mathematical processor should have at least three decimal orders of precision beyond what is desired in the end result.


Phase-Lock Synchronization

Requirements:

• Vector samples must fall in exactly the right places in the right time interval, or testing will be very inaccurate.

• The digitizing window must be coordinated precisely with each clock, signal, and distortion component. This is implemented with a common reference frequency, controlled by a phase-locked loop (PLL).

• Synchronization gives the DSP system a coherence property, in which all frequency and time functions are programmably related in exact whole-number ratios.

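The whole-number-ratio requirement can be sketched in Python: choose the test tone as M cycles per N samples, with M and N coprime so every sample lands on a distinct phase of the tone. The function name and the 48 kHz / 1024-sample values are illustrative assumptions, not from the original.

```python
from math import gcd

def coherent_frequency(f_target, f_s, n_samples):
    """Return (f_test, m_cycles) near f_target such that
    f_test = m / n_samples * f_s with m and n_samples coprime."""
    m = round(f_target * n_samples / f_s)
    # bump m until it shares no factor with n_samples: guarantees each
    # of the n_samples hits a unique phase of the test tone
    while gcd(m, n_samples) != 1:
        m += 1
    return m * f_s / n_samples, m

f_test, m = coherent_frequency(1000.0, 48000.0, 1024)
# m = 21 cycles in 1024 samples -> f_test = 984.375 Hz, coherent with f_s
```

Note the test frequency moves slightly off the nominal 1 kHz; coherence, not the exact frequency, is what makes the FFT bins line up with no leakage.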

Waveform synthesizer concept


Waveform Synthesis

• It has a programmable reconstruction filter for a 16-bit DAC to produce continuous analog waveforms, by eliminating staircasing in the analog output signal. We also need sin(x)/x (sinc) correction applied to the signal spectrum during digital pattern synthesis.

• Waveform sampling and digitization- It is complex, and contains a differential buffer, a programmable-gain amplifier (PGA), and a programmable anti-aliasing filter. It needs a built-in track-hold circuit in the analog-to-digital converter (ADC.)

• A typical instrument has three independent PLLs, which are needed in telecommunications testing to generate and coordinate several different clock rates at once.
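The sin(x)/x correction mentioned above can be sketched numerically: a zero-order-hold DAC attenuates a tone at frequency f by |sin(pi f/f_s)/(pi f/f_s)|, so the synthesis software pre-emphasizes by the inverse. The function names and the 48 kHz rate are illustrative assumptions.

```python
import math

def zoh_droop(f, f_s):
    """Amplitude attenuation of an ideal zero-order-hold DAC at frequency f:
    |sin(pi*f/f_s) / (pi*f/f_s)|."""
    x = math.pi * f / f_s
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def sinc_corrected_amplitude(a, f, f_s):
    """Pre-emphasize a synthesized tone so the DAC output comes out flat."""
    return a / zoh_droop(f, f_s)

# at Nyquist (f_s/2) the droop is 2/pi, about -3.9 dB
droop = zoh_droop(24000.0, 48000.0)
```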


• The waveform from the DAC is sent through programmable coarse and fine attenuators to set the proper level. These attenuators keep the DAC working at its full-scale value to minimize quantization noise. This also allows addition of a programmable DC offset to the synthesized signal.

• Frequency synchronization requires that clocking for the digital pins, analog waveform synthesizer, and analog waveform sampler be derived from a single crystal reference.

• This allows different modules in the ATE to use different frequencies, which have known fixed integer ratio relationships between them. All of the clocking for the WS and WM comes from the Pacemaker, and is chosen for the waveform


Mixed-signal testing problem and system-on-a-chip

ADC/DAC Testing

• Specialized testing methods are used for ADC and digital/analog converter (DAC) testing.

• The ideal ADC discards data, while the ideal DAC does not lose information.

• The ADC cannot be tested by a DC test and examination of the digitized output for correct codes, because noise, statistical converter behaviour, and converter slew rate errors may cause such a test to pass defective ADCs, or to fail good ADCs.

• ADC testing is non-deterministic and must involve more statistical analysis than DAC testing. Even good ADCs have noise and occasional bursts of extraneous (sparkle) codes.


Ideal ADC and DAC transfer functions

Transmission vs. Intrinsic Parameters

• The transmission parameters (or performance parameters) indicate how the channel where the converter is embedded affects a multi-tone test signal. Transmission parameters are:

 – Gain

 – Signal-to-distortion ratio

 – Intermodulation (IM) distortion

 – Noise power ratio (NPR)

 – Differential phase shift

 – Envelope delay distortion

• Intrinsic parameters define the specifications of the DUT.

Parameters

• ADC/DAC common parameters:

 – full-scale range (FSR) of the converter input voltage

 – gain

 – number of bits

 – static linearity (differential and integral)

 – maximum clock rate

 – code format

• The settling time and the glitch area are relevant only to DACs.

 – Settling time indicates how long the reconstruction filter on the DAC output takes to settle to its correct value.

 – Glitch area is the area in the DAC output represented by glitching pulses.

As frequencies and conversion rates increase, transmission parameters become more useful than intrinsic parameters for testing.

Offset error in ADC and DAC transfer functions

DAC transfer function non-linearity errors

Uncertainty and Distortion in Ideal ADCs

• The RMS uncertainty of quantization is the RMS amplitude of the sawtooth error waveform, and is also the RMS distortion value introduced into analogue signals transmitted through the converter.

• The sawtooth has a peak-to-peak height of 1 LSB, and the waveform power is 1/12 LSB squared. When Q is the quantum (LSB) voltage, the RMS quantization distortion voltage is:

 V(rms distortion) = Q / sqrt(12)

When one applies a sinusoid to the ideal n-bit ADC such that the peaks just touch the virtual edges (peak amplitude 2^n Q / 2), the RMS amplitude becomes:

 V(rms signal) = 2^n Q / (2 sqrt(2))

Uncertainty and Distortion in Ideal ADCs

• By dividing the RMS quantization distortion into the RMS amplitude, we get the signal-to-distortion ratio:

 (2^n Q / (2 sqrt(2))) / (Q / sqrt(12)) = 2^(n-1) sqrt(6)

• In dB it is:

 SNR(dB) = 6.02 n + 1.76

• The distortion power is independent of signal level as long as the signal is at least several LSBs in amplitude. Distortion power can approximate the signal-to-noise ratio, after an adjustment to reflect the test signal's actual power.
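The ideal-ADC signal-to-distortion ratio derived above can be checked numerically; the function name is an illustrative assumption.

```python
import math

def ideal_adc_snr_db(n_bits):
    """Signal-to-distortion ratio of an ideal n-bit ADC with a full-scale
    sine input: RMS signal 2**n * Q / (2*sqrt(2)) over RMS quantization
    distortion Q / sqrt(12), expressed in dB (~6.02n + 1.76)."""
    rms_signal = 2**n_bits / (2 * math.sqrt(2))   # in units of Q
    rms_distortion = 1 / math.sqrt(12)            # Q / sqrt(12), same units
    return 20 * math.log10(rms_signal / rms_distortion)

snr_8 = ideal_adc_snr_db(8)   # about 49.9 dB for an 8-bit converter
```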


Example sampled converter transfer function

DAC Transfer Function Error

• Differential non-linearity (DNL) measures the difference between the actual and linearized increments for each step in the converter transfer curve.

• DNL is defined as the root-mean-square value of the DLE function or alternatively as the DLE function value for the worst step.

• Integral non-linearity (INL) measures the difference between the reference line and the actual converter transfer function. The integral linearity error (ILE) function is computed by integrating the DLE error vectors from the lowest code through the code of interest, in order of increasing codes.
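The DNL/INL computations just described can be sketched as follows; the endpoint-fit reference line and the 4-level example readings are illustrative assumptions, not data from the original.

```python
def dnl_inl(levels):
    """Worst-case DNL and INL (in LSB) from a measured DAC transfer curve
    `levels` (one measured output per input code), using an endpoint-fit
    reference line. Per-step DLE feeds a running integral to give ILE."""
    n = len(levels)
    ideal_lsb = (levels[-1] - levels[0]) / (n - 1)
    dle = [(levels[i + 1] - levels[i]) / ideal_lsb - 1.0 for i in range(n - 1)]
    ile, acc = [0.0], 0.0
    for d in dle:                 # integrate DLE from the lowest code upward
        acc += d
        ile.append(acc)
    return max(abs(d) for d in dle), max(abs(e) for e in ile)

dnl, inl = dnl_inl([0.0, 0.9, 2.1, 3.0])   # made-up 2-bit DAC readings
```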


• Superposition error -results from non-constant bit weights in DACs.

• Time-related errors occur in DACs because they are subject to random noise, so the real DAC has fuzzy vertical lines in its transfer function.

• Some designs have hysteresis, and some have glitching or even ringing at analog outputs due to variations in timings between currents switching off and those turning on.

• Finally, RC roll off of analog circuits brings about a settling output delay before steady-state is achieved.


ADC Transfer Function Error - Flash ADC Testing Methods

• Differential linearity is defined differently from DACs, because the ADC has comparators, and the DAC does not.

• Flash A/D converters operate at 20 to 100 megasamples/s (Ms/s), and even up to 250 Ms/s. They are code-dominated converters, so each step can have an error different from any other step.

• The input signal frequency and wave shape affect dynamic linearity.

• Normally, the flash converter decoding logic consists of a bank of flip-flops that latch the comparator thermometer code, followed by a digital priority encoder.

Typical flash ADC


Static Linear Histogram Technique with a Triangle Wave

• For DLE and ADC transfer function measurement we use a very slow triangle wave that ramps from just below the minimum full-scale voltage to a voltage just above the maximum full-scale voltage and back down again.

• The ramp should be slow enough that each code is digitized roughly 10 times before the ramp moves to the next code.

• To ensure that each code of the ADC is fully tested, the test waveform is repeated 100 to 150 times; the converter behaves statistically and we want its average behaviour.

• One problem is that the histogram ignores non-monotonic ADC codes - the sparkle and glitch codes.
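The histogram-to-DLE step can be sketched under the assumption that an ideal slow ramp fills every code bin with the same expected count; the tiny histogram below is a made-up example, and the end bins are discarded because the ramp overdrives past full scale.

```python
def dle_from_histogram(hist):
    """Estimate per-code DLE (in LSB) from a ramp-test code histogram.
    With an ideal slow ramp every inner code bin collects the same count,
    so each bin's deviation from the mean count is that code's DLE."""
    inner = hist[1:-1]            # drop overdriven end bins
    mean = sum(inner) / len(inner)
    return [c / mean - 1.0 for c in inner]

# made-up histogram: big end bins from overdrive, 4 inner codes
dle = dle_from_histogram([500, 10, 12, 8, 10, 480])
# code 1 is 20% wide, code 2 is 20% narrow, codes 0 and 3 are ideal
```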


Linear histogram of 8-bit flash ADC

DLE of 8-bit flash ADC

Derivation of Integral Linearity Error (ILE)

• Matrix E is the ILE function of the ADC. We compute it from the DLE vector (D), by integration, using the centre of the first step as a reference.


High-Frequency Histogram Technique with a Sine Wave

• A high-frequency test for ADCs catches the sparkle and glitch codes. At high frequency, triangle waves are hard to generate, so we use a sine wave, and coherently test the converter with the previously-described DSP technique.

• The dynamic linearity of the ADC worsens with increasing input slew rate; this test detects such problems, and also lets us examine the spectral response of the converter.
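A sine input produces a bathtub-shaped (arcsine-pdf) histogram rather than a flat one, so each bin count must be compared against its arcsine expectation. A sketch of the expected counts for an ideal ADC; all parameter values are illustrative.

```python
import math

def expected_sine_counts(n_codes, n_samples, amplitude, full_scale):
    """Expected histogram of an ideal ADC digitizing a pure sine of peak
    `amplitude`, with n_codes bins spanning full_scale. The sine's amplitude
    pdf is 1/(pi*sqrt(A**2 - v**2)); integrating it over each code bin gives
    the bin probability."""
    counts = []
    for k in range(n_codes):
        lo = -full_scale / 2 + k * full_scale / n_codes
        hi = lo + full_scale / n_codes
        lo, hi = max(lo, -amplitude), min(hi, amplitude)
        if lo >= hi:                      # bin entirely outside the sine range
            counts.append(0.0)
            continue
        p = (math.asin(hi / amplitude) - math.asin(lo / amplitude)) / math.pi
        counts.append(p * n_samples)
    return counts

# 8 codes, 100000 coherent samples, sine exactly spanning full scale
counts = expected_sine_counts(8, 100000, 1.0, 2.0)
```

A real device's measured histogram divided bin-by-bin by these expected counts yields the dynamic DLE.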


Sinusoidal histogram of 8-bit flash ADC


DAC Testing Methods

• For testing the simpler and slower D/A converters, it suffices to digitize the output steps of the DAC, average them, and compute the DNL and INL.

• For high-performance DACs, different approaches are used.

Indirect Voltage Measurement

• For computing INL for a DAC, a 16-bit D/A converter with a step size of 15 parts per million (ppm) requires a measurement error below 2 ppm.

• It is necessary to use indirect measurement to achieve this.

• DAC transfer function errors appear as vertical displacements of the points.

• A direct measurement of the DAC transfer map is inappropriate for these reasons.

Realizing Emulated Instruments Using Fourier Transforms

• Conventional analog measurement instruments have the weakness of using a central ADC as the master DC voltmeter; for measuring AC parameters, this ADC is preceded by a detector.

Realizing Emulated Instruments Using Fourier Transforms

• The detector decreases AC measurement accuracy, because analog detectors usually employ at least one non-linear function. Unfortunately, analog circuits cannot reproduce non-linear characteristics nearly as well as linear ones, so the typical accuracy is 0.1 to 1%.

• Calibration is also a problem.

• In addition, the detector has a low-pass filter with a long time constant to produce a smooth DC output, which slows down testing.

• Analog detectors may take 25 to 100 periods at the lowest-rated frequency to settle to 0.1%.

Digitization in the conventional analog ATE

• In the DSP-based ATE, instead, the detector is placed after the ADC and is implemented as a software program in a DSP processor. This is called an emulated instrument.

• Since there are no longer any non-linear analog functions in the analog portion of the ATE, accuracy is improved for both AC and DC measurements.

• DSP-based ATE will synchronize the detector with the signal source. This greatly increases detection speed and eliminates ripple, because the filter in the detector is replaced with a timed integrator whose integration interval must exactly span a whole number of cycles.

Digitization in the DSP-based analog ATE

• In the continuous domain, DC, absolute average AC, and true root mean square (RMS) measurements are given by the following equations (integrals taken over a whole number of cycles T):

 V(DC) = (1/T) ∫ v(t) dt

 V(avg) = (1/T) ∫ |v(t)| dt

 V(RMS) = sqrt( (1/T) ∫ v(t)^2 dt )
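A sampled-data sketch of these three emulated detectors; the 1024-sample coherent sine is an assumed test signal, not from the original.

```python
import math

def emulated_detectors(samples):
    """Software 'emulated instrument' detectors computed over a whole
    number of cycles: DC (mean), absolute-average AC, and true RMS."""
    n = len(samples)
    dc = sum(samples) / n
    avg_abs = sum(abs(s) for s in samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return dc, avg_abs, rms

# one exact cycle of a unit sine, sampled coherently
wave = [math.sin(2 * math.pi * k / 1024) for k in range(1024)]
dc, avg_abs, rms = emulated_detectors(wave)
# expect dc ~ 0, avg_abs ~ 2/pi, rms ~ 1/sqrt(2)
```

Because the "integration interval" here is exactly one cycle, the results converge with no ripple, which is the point made above about the timed integrator.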


ADC Test Procedures

• Should largely be functional, in accordance with device specifications.

• Emphasis should be given to:

 – Throughput and speed of test

 – Purity of analog waveforms

 – Equal quantization of digital increments

 – Temperature coefficients and conversion stability

 – Settling time

However, when ADCs/DACs are used within complex systems, observability and controllability get restricted, and alternative testing procedures have to be incorporated.

Alternative testing procedures - concepts

• The number of steps in the digital code is the resolution of the converter.

• As one digital code word represents a discrete interval of the analog input signal, this interval is usually referred to as the quantization step QS, with amplitude:

 QS = FS / 2^n

where n is the number of bits in the conversion and FS is the full-scale reading of the analog signal.

• Maximum quantization error (assuming a perfect converter):

 QE(max) = QS / 2
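A minimal sketch of the two formulas above; the 10 V full scale and 12-bit resolution are assumed example values.

```python
def quantization(full_scale, n_bits):
    """Quantization step QS = FS / 2**n and the worst-case quantization
    error QS / 2 for an ideal converter."""
    qs = full_scale / 2**n_bits
    return qs, qs / 2

qs, qe_max = quantization(10.0, 12)   # 12-bit converter over 10 V
# each extra bit halves both the step size and the maximum error
```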

Imperfections

• Offset error: a constant shift of the complete transfer characteristic.

• Gain error: gives an incorrect full-scale value.

• Non-linearity, which may be divided into:

 – Integral non-linearity: a global measure of non-linearity, defined as the maximum deviation of the actual transfer characteristic from the straight line drawn between the zero and full-scale points of the ideal converter.

 – Differential non-linearity: defined as the maximum deviation of any analog voltage change, for a one-bit change in the digital representation, from the ideal analog change of FS / 2^n.

• All these errors, plus noise, settling time, temperature coefficient, non-linearity (all within permissible limits), etc., are the responsibility of the vendor.

• The OEM tests for gross errors in off-the-shelf products.

Testing of particular analog circuits

• Filters: design is one of the areas which is fully automated; CAD software can be used.

 – Butterworth: giving a maximally flat response.

 – Chebyshev: giving the steepest transition from pass band to stop band, but at the expense of ripple in the pass band.

 – Bessel: giving maximally flat time delay.

 – Elliptic: giving the sharpest knee at the edges of the pass band.

Non-conventional ways

• Research in progress:

 – Impulse or transient testing.

 – Fault-free signature recorded.

• Hard faults: DC fault dictionary approach.

 – The response is analyzed under the fault-free condition and under some set of faults.

• Supply current monitoring: another overall test strategy.

Filter testing

• Digital modeling

• Functional K-map modeling

• Logical decomposition

• No strategies for switched-capacitor filters.

• Transient response: the most valid test.

• Use of pseudorandom sequences and the impulse response relationship.
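The pseudorandom-sequence / impulse-response relationship can be sketched: cross-correlating a maximal-length ±1 sequence with the filter's response to it recovers the impulse response, to within a small bias of order 1/N, because the sequence's circular autocorrelation is nearly a unit impulse. The 3-tap filter is a hypothetical DUT, and the LFSR taps are one primitive choice for degree 7.

```python
def mls(n_bits, taps):
    """Maximal-length +/-1 sequence (length 2**n_bits - 1) from a
    Fibonacci LFSR with XOR feedback from the given tap positions."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return seq

def circular_xcorr(x, y):
    """R_xy(lag) = (1/N) * sum_k x[k] * y[(k + lag) mod N]."""
    n = len(x)
    return [sum(x[k] * y[(k + lag) % n] for k in range(n)) / n
            for lag in range(n)]

h = [1.0, 0.5, 0.25]          # hypothetical filter impulse response
x = mls(7, [6, 5])            # 127-sample m-sequence test input
n = len(x)
# filter output: circular convolution of the input with h
y = [sum(h[j] * x[(k - j) % n] for j in range(len(h))) for k in range(n)]
est = circular_xcorr(x, y)    # est[lag] approximates h[lag]
```

Longer sequences shrink the 1/N bias, which is why practical transient-test schemes use long m-sequences.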


FILTER classification

Analog filters divide into passive (discrete) and active types. Continuous-time linear filters include Chebyshev, Butterworth, Bessel, elliptical and others, each realizable as LPF, HPF, band-pass or band-stop; switched-capacitor filters form a further class.

MEMORY TESTING

• The fastest memory in the hierarchy is the static RAM in the microprocessor cache, dynamic RAM is the off-chip main memory, and Winchester moving-head disk drive technology is at the bottom of the hierarchy.

• The widespread deployment of virtual memory, and the increase in typical computer memory sizes to several megabytes, has only been possible because of increasing memory chip density and cost-effective memory testing.

• Virtual memory has freed tens of millions of programmers from the need to code software in assembly language, and from the need to carefully construct software overlays.

Families of memory circuits

Memory ICs divide into non-volatile ROM and volatile RAM:

• ROM: mask-programmable or field-programmable, realized in bipolar (Schottky TTL) or unipolar (NMOS, CMOS) technologies.

• RAM: static or dynamic, realized in bipolar (Schottky TTL, ECL, I2L) or unipolar (NMOS, CMOS) technologies; ferroelectric memories are a further class.

The significant types of semiconductor memory are

1. Dynamic Random Access Memory (DRAM) has the highest possible density but a slow access time of 20 ns. Bits are stored as charge on a single capacitor, but the memory must be refreshed, typically every 2, 4, or 6 ms, if information is not to be lost.


2. Static Random Access Memory (SRAM) has the fastest possible speed, with a 2 ns access time. Bits are stored in cross-coupled latches, and the memory need not be refreshed.

3. Cache DRAM (CDRAM) combines both SRAM and DRAM on the same chip, in order to accelerate block transfer between the SRAM cache and the slow DRAM.

4. Read-Only Memories (ROMs) have every bit content programmed by the presence or absence of a transistor at manufacturing time, and do not lose information when power is shut off.


The significant types of semiconductor memory are

5. Erasable, Programmable Read-Only Memories (EPROMs) are ROMs that can be programmed in the field. The entire contents is erased by applying ultraviolet light, and then the EPROM can be reprogrammed.

6. Electrically Erasable, Programmable ROMs (EEPROMs) are programmable in the field, and words in them can be selectively erased by electrical means.


Programmable Logic Devices: PROM, PLA, PAL

What is Programmable Logic

• Programmable Logic Controllers (PLC)

• Programmable Logic Devices

– Field Programmable Gate Array (FPGA)

– Application Specific Integrated Circuit (ASIC)

– System-on-chip (SOC)

– Complex PLD (CPLD)

– others

Programmable Logic Devices

• Provide a ready means for random combinational logic.

• Used in DSP processors, for logic partitioning with complex logic ICs.

• PLA: programmable AND-OR planes.

• The programmed output function is dependent on the presence or absence of connections at the cross points.

Programmable Logic Devices

• Categories of prewired arrays (or field-programmable devices):

 – Fuse-based (program-once)

 – Non-volatile EPROM-based

 – RAM-based

• Recently:

 – VPGA (Via-Programmable Gate Array)

 – Structured ASIC

The causes of faults in PLDs


• Incomplete specifications

• Design and implementation errors (common mode)

• Unexpected or unanticipated combinations of valid operating states

• Unintended interactions

• Unknown defects in tools (design or verification)

Complexity

• Fault modeling is not possible with stuck-at faults alone, due to the cross points.

• Re-convergent fan-out topology: backtracking in PODEM becomes complex and time consuming.

• High fan-in militates against the use of random and pseudorandom test patterns.

 – Hence it becomes more appropriate to consider a self-testing architecture.

 – The programmed output function depends upon the presence or absence of connections at the cross points in the AND/OR arrays. Therefore cross-point faults are to be modeled.


Cross-point faults in PLAs are classified as:

– Growth fault: a missing connection in the AND array, causing a product term to lose a literal and hence to have twice as many minterms as it should have.

– Shrinkage fault: an erroneous extra connection in the AND array, causing a product term to have an extra literal and lose half of its required minterms.

– Disappearance fault: a missing connection in the OR array, causing one complete product term to be lost in one or more outputs.

– Appearance fault: an erroneous extra connection in the OR array, causing an additional literal or product term to appear in one or more OR outputs.
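The growth/shrinkage definitions can be illustrated with a toy minterm counter; the 4-variable term x0·x2' and the fault instances are made-up examples.

```python
from itertools import product

def minterms(term_literals, n_vars):
    """Input vectors covered by a PLA product term.
    term_literals maps variable index -> required value (1 for xi, 0 for xi')."""
    return [v for v in product([0, 1], repeat=n_vars)
            if all(v[i] == val for i, val in term_literals.items())]

n_vars = 4
good = {0: 1, 2: 0}              # product term x0 . x2'
growth = {0: 1}                  # growth fault: literal x2' lost from the AND array
shrinkage = {0: 1, 2: 0, 3: 1}   # shrinkage fault: extra literal x3 connected

base = len(minterms(good, n_vars))
# growth doubles the covered minterms; shrinkage halves them
```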

Hardware/Software Differences

• Most PL cannot be changed once “burned” (programmed). FPGAs can be programmed on-the-fly.

• Software execution is serial - one instruction after another.

• PL execution is parallel - multiple simultaneous signals and processes.

• PL is designed, verified, and tested by engineers.


Current PL Process

• Design from system requirements.

• Functional simulation

 – Includes “corner cases”

• Testing (unit and system)

 – Simulation and unit test usually performed by the design engineer

 – May perform code coverage measurement

• Verification takes 70% of the design task.


ATPG program considerations

• Individual possible cross-point faults, and the determination of appropriate input test vectors to detect their existence if present.

• The relationship between the types of cross-point fault and the effect on the total test vector requirements.


PLA Test Method

• Concurrent: totally self-checking concurrent test by secure PLA, series of checkers.

• Self-testing: PLA with BILBO, self-testing PLA.

• Test generation: PLA with universal self test, function-independent testing, parallel testable PLA, syndrome-testable PLA.


Disadvantages of ATPG

• Multiple cross-point faults are not specially considered.

• Test patterns are function dependent, not generalized.

• If the PLA is part of a complex VLSI circuit, controllability and observability of the PLA may not yield access for complete fault testing.

• PLA sizes grow towards 50 or more inputs and hundreds of product terms, so the number of cross points increases, making the ATPG program consume CPU (execution) time.


Other approaches of ATPG include:

• Syndrome and other offline output-compression test methods.

 – A powerful tool if all faults result in growth and shrinkage faults.

 – Disadvantage: the full syndrome count of each output has to be determined, which is difficult due to the exhaustive input set of test vectors.

• Online concurrent error detection by the addition of checker circuits using Berger codes.

• Additions to the basic PLA architecture to provide some built-in self-test facility to ease the testing problem.

Offline testable PLA Designs

The principal needs for offline PLA designs include:

 – Controllability of the 2n Xi and Xi' input literals to the AND array

 – Separate controllability and observability for each of the k product lines

 – Some observability AND/OR check of connections in the OR array which gives the final output.

Principal idea:

 – Alter the normal Xi input decoders to modify the Xi and Xi' input literals to the AND array.

 – Add scan-path shift registers to shift specific 0 and 1 signals into the AND or OR arrays and to scan out data.

 – Add additional product columns to the AND array and/or OR array to provide a parity check or to check other resources.

Test procedure

1. Set all stages in the shift register to logic 0 to disable all product lines, and check that both parity outputs are zero. This checks for stuck-at-1 faults in the parity check circuits.

2. Shift a 1 into the first stage to select the first product column only. Set all Xi inputs to 0 and Y1, Y2 to 10, which gives 1 on all 2n AND input literals. Check that the odd-parity check on column 1 is 1 by observing output Z2; if no cross-point fault is present, only the parity bit in the OR array is driven, giving Z2 = 1.

3. Repeat steps 1 and 2 with all Xi inputs set to 1 and Y1, Y2 set to 01, which again gives 1 on all AND array inputs.

4. Repeat steps 2 and 3 for each of the remaining product columns by shifting the 1 along the shift register, to check all OR-matrix cross points.

5. Set all stages in the shift register to 1 to energize all product columns. With Y1, Y2 = 01, set X1 to 0 and all remaining inputs to 1 to deactivate the X1 row in the AND array but activate all remaining 2n - 1 rows. This removes all programmed cross points in the first row of the AND array, including the parity cross point; check that Z1 = 1.

6. Repeat with Y1, Y2 = 01, X1 = 1 and all remaining inputs at 0 to deactivate the X1′ row in the AND array, checking Z1 = 1 as in step 5.

7. Continue until all input rows have been checked.
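The column-parity idea behind steps 2-4 can be sketched as follows. This is a heavy simplification: the 4-term OR-array contents and the output names (Z0, Z1, Zp) are hypothetical, and the AND array and shift-register mechanics are not modelled.

```python
# Hypothetical OR array for a 4-product-term PLA: or_array[p] is the set
# of output lines with a programmed cross point on product column p.
or_array = [
    {"Z1"},
    {"Z1", "Z0"},
    {"Z0"},
    {"Z1", "Z0"},
]
PARITY_OUT = "Zp"
for col in or_array:                    # program the extra parity row so
    if len(col) % 2 == 0:               # every column has odd parity
        col.add(PARITY_OUT)

def column_parity_ok(p):
    """Energize product column p alone (a single 1 in the shift register)
    and check the odd-parity condition on its programmed cross points."""
    return len(or_array[p]) % 2 == 1

# Steps 2-4: walk a single 1 along the shift register; every column passes.
assert all(column_parity_ok(p) for p in range(len(or_array)))

# A missing cross point in column 1 flips that column's parity and is caught.
or_array[1].discard("Z0")
assert not column_parity_ok(1)
```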


Online testable PLA Designs-variants


PLA of Khakbaz and McCluskey:

• Each pair of literal lines in the AND array forms a two-rail code.

• The signals on the k product lines are arranged such that only one product line is energized at a time in normal mode.

• Additional output lines are added to the OR array to detect errors.

Built-in PLA self-test:

• Test procedure: scan a chosen initial pattern into the product-line shift register and into the input-decoder shift register, and apply a clock to load data into the AND array.

• The AND and OR parity checks and two additional AND product terms are fed back to a feedback-value generator to detect cross-point faults.

• If a fault is present, the future test patterns are modified so as to complete the scheduled checks.

• Drawback: multiple-fault masking.

Other BIST methods: parallel testable PLA architectures, divide-and-conquer testable architectures, specialized-layout testable PLA architectures, and random-pattern testable PLA architectures.

Other programmable logic devices


• Programmable Array Logic devices (PALs)

• Programmable Logic Sequencers (PLSs)

• Programmable Diode Matrix devices (PDMs) - now obsolete

• Erasable Programmable Logic Devices (EPLDs)

• Mask-Programmable Logic Devices (MPLDs)

• Generic Array Logic devices (GALs)

• The latest term, CPLD (Complex Programmable Logic Device), applies to large PLAs or FPGAs rather than the small structures listed above.

PLD


Test pattern generators essentially look at:

• Individual possible cross-point faults and the determination of appropriate test vectors to detect their existence.

• The relationship between the types of cross-point fault and their effect on the total test-vector requirement.

Disadvantages of ATPG:

• Multiple cross-point faults are not specially considered.

• Test patterns depend on the dedicated function programmed into the PLA.

• If the PLA is part of a complex VLSI circuit, controllability and observability are restricted.

• As PLA size grows, the number of cross-point faults increases and fault modeling becomes difficult; more CPU time is consumed in fault modeling and execution.

Memory Density and Defect Trends

• In memory technology, the number of bits per chip quadruples roughly every 3.1 (or π) years. The bit count per chip continues to increase exponentially, and this causes the memory price to decrease exponentially.

• Exponential density increase implies an exponential decrease in area per memory cell, which in turn implies an exponential decrease in the size of the capacitor used in a DRAM to remember a single bit.


Test Time Complexity

• Memory tests should deliver the best fault coverage possible given a certain test time.

• The major change in memory testing is that tests are now based on fault models; a fault model is an abstraction of the errors caused by particular physical faults.

• The purpose of a fault model is to simplify testing and reduce testing time.

• Memory tests with high fault coverage do not necessarily test a high percentage of the defects (defect coverage) occurring in production.


Faults

• A system is defined here either as a purely electronic system or as a mixed electronic, electromechanical, chemical, and photonic device system. Such systems are coming into common use through micro-electro-mechanical system (MEMS) technology, which combines all of the above-listed devices on a single chip.

• A system failure occurs when system behavior is incorrect or interrupted. Failures are caused by errors, which are manifestations of faults in the system.

• A fault is present in the system when there is a physical difference between the good system and the incorrect (failing or bad) system; some time may elapse before a fault causes a detectable system error.


Fault Manifestations

• Permanent. These faults exist indefinitely and can therefore be modeled with a fault model. They are caused by mechanisms such as:

– Bad electrical connections (missing or added)

– Broken components (this could be an IC mask defect, or a silicon-to-metal or metal-to-package connection problem)

– Burnt-out chip wires

– Corroded connections between chip and package

– Chip logic errors


• Non-Permanent. Non-permanent faults are present only part of the time, and occur randomly.

• Transient faults: cosmic rays, ionized helium atoms, air pollution (causing a temporary wire short or open).

• Intermittent faults: loose connections; aging components (logic-gate delays change, so relative signal arrival times change); hazards and races in critical timing paths (from bad design); varying resistors, capacitors, and inductors (causing timing faults); physical irregularities (e.g. a narrow wire, which causes a high-resistance connection); electrical noise (which causes memory cells to change state).



Memory Test Levels: Functional Test


• Chip testing must be done with a memory fault model to make it economical.

• Memory board testing must test the memory array, the refresh logic, the error detection and correction logic, the board selector hardware, and the memory board controller.

• Electrical parametric tests are also important for memory systems.



Memory Fault Modeling

• Logical fault models abstract the physical defects in the memory.

• A behavioral or black-box model of a memory system models the memory as a state machine, which has states for all possible combinations of the memory contents.

• An electrical model allows detailed fault localization.

• The geometrical model of a memory implies complete knowledge of the chip layout.


Functional memory model.


Simplified Functional Memory Model


Once a fault is detected by functional testing of a memory chip, the only option is to discard the faulty chip and replace it with another one.

Reduced Functional Faults


Functional RAM Fault Models


• Stuck-At Faults. The stuck-at fault (SAF) is one in which the logic value of a cell or line is always 0 (SA0) or always 1 (SA1). The cell/line is always in the faulty state and cannot be changed.

• Transition Faults. The transition fault (TF) is a special case of the SAF, in which a cell fails to make an up (0 to 1) transition or a down (1 to 0) transition when it is written.

State transition diagram model for stuck-at and transition faults


Functional RAM Fault Models


• Coupling Faults. A coupling fault (CF) means that a transition in memory bit j causes an unwanted change in memory bit i. The 2-coupling fault is a coupling fault involving two cells.

• Inversion Coupling Faults. An inversion coupling fault (CFin) means that a 0-to-1 or 1-to-0 transition in cell j inverts the contents of cell i. Cell i is said to be coupled to cell j, which is the coupling cell.

• Dynamic Coupling Faults. A dynamic coupling fault (CFdyn) occurs between cells in different words. A read or write operation on one cell forces the contents of the second cell to either 0 or 1.

Functional RAM Fault Models


• Bridging Faults. A bridging fault (BF) is a short circuit between two or more cells or lines. It is a bidirectional fault, so either cell/line can affect the other. A 0 or 1 state of the coupling cell causes the fault, rather than a coupling-cell transition.

• State Coupling Faults. The state coupling fault (SCF) is one in which the coupling cell/line j in a given state y forces the coupled cell/line i into state x.

• Address Decoder Faults. An address decoder fault (AF) represents an address decoding error; we assume that the decoder logic does not become sequential.



• Neighbourhood Pattern Sensitive Coupling Faults. In a pattern sensitive fault (PSF), the content of cell i (or the ability of cell i to change) is influenced by the contents of all other memory cells, which may be either a pattern of 0s and 1s or a pattern of transitions.

The neighbourhood is the set of all cells involved in this fault; the base cell is the cell under test, and the deleted neighbourhood is the neighbourhood without the base cell.


RAM testing


• RAMs do not contain fixed data.

• There is no predefined input/output relationship that can be tested to confirm operation.

• Two aspects are considered:

1. Initial tests which attempt to confirm that each individual RAM cell is capable of being written to logic 1 or 0 and read from.

2. Online tests to confirm that the overall RAM structure continues to operate correctly.

Faults in RAMs

• One or more bits in the memory stuck at 0 or 1.

• Coupling between cells, such that a transition from 0 to 1 (or 1 to 0) in one cell causes a change in another cell: pattern-sensitive faults.

• Unwanted charge leakage in dynamic RAMs, resulting in data changes.

• Inadequate charging, leading to data changes.

• Temporary faults caused by external interference.


RAM testing

• Marching patterns

• Walking patterns

• Diagonal patterns

• Galloping patterns

• Nearest neighbor pattern
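As an illustration of the pattern list above, a walking-1s test can be sketched as follows. The read/write accessors are hypothetical stand-ins for the RAM under test.

```python
def walking_ones_test(read, write, n):
    """Walking-1s: clear the memory, then walk a single 1 through it;
    at each position verify the 1 and that every other cell is still 0.
    Detects stuck-at faults and many coupling and address-decoder
    faults, at O(n^2) read cost."""
    for a in range(n):
        write(a, 0)
    for a in range(n):
        write(a, 1)
        for b in range(n):
            expected = 1 if b == a else 0
            if read(b) != expected:
                return False            # fault detected
        write(a, 0)                     # restore before moving the 1
    return True

# A fault-free 8-cell RAM passes.
mem = [0] * 8
assert walking_ones_test(lambda a: mem[a], lambda a, v: mem.__setitem__(a, v), 8)

# A cell that cannot be written (stuck-at-0) fails the test.
stuck = [0] * 8
def w(a, v):
    if a != 3:                          # cell 3 ignores writes, reads as 0
        stuck[a] = v
assert not walking_ones_test(lambda a: stuck[a], w, 8)
```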


Functional RAM Testing with March Tests
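A march test applies a short sequence of read/write operations to every cell, marching through the addresses in a fixed order. A minimal sketch using the well-known MATS+ march test, { up(w0); up(r0,w1); down(r1,w0) }; the accessor functions are hypothetical stand-ins for the RAM under test:

```python
def mats_plus(read, write, n):
    """MATS+ march test: { up(w0); up(r0,w1); down(r1,w0) }.
    Detects all stuck-at faults and address-decoder faults in 5n
    operations."""
    for a in range(n):                  # M0: ascending, write 0
        write(a, 0)
    for a in range(n):                  # M1: ascending, read 0 then write 1
        if read(a) != 0:
            return False
        write(a, 1)
    for a in reversed(range(n)):        # M2: descending, read 1 then write 0
        if read(a) != 1:
            return False
        write(a, 0)
    return True

# A fault-free 16-cell RAM passes.
mem = [0] * 16
assert mats_plus(lambda a: mem[a], lambda a, v: mem.__setitem__(a, v), 16)

# A stuck-at-1 cell is caught in march element M1.
sa1 = [0] * 16
def w(a, v):
    sa1[a] = 1 if a == 5 else v         # cell 5 always holds 1
assert not mats_plus(lambda a: sa1[a], w, 16)
```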



Testing RAM Neighbourhood Pattern-Sensitive Faults

• Assumptions and Testing Requirements. We always assume that read operations of memory cells are fault-free in the NPSF testing algorithms, in order to make them practical.

Detection of cell (2, 1) multiple address decoder faults.


The necessary condition to detect and locate a static neighbourhood pattern sensitive fault (SNPSF): we must apply all 2^k combinations of 0s and 1s to the k-cell neighbourhood, and verify by reading each cell that each pattern can be stored.
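This exhaustive condition can be sketched as follows, assuming fault-free reads as stated above. The dictionary-backed memory and the 5-cell neighbourhood coordinates are illustrative choices, not a specific chip layout.

```python
from itertools import product

def snpsf_test(read, write, neighbourhood):
    """Static NPSF condition: write every one of the 2^k 0/1 patterns
    to the k-cell neighbourhood, then read each cell back to verify
    the pattern was stored (reads are assumed fault-free)."""
    k = len(neighbourhood)
    for pattern in product((0, 1), repeat=k):   # all 2^k combinations
        for cell, v in zip(neighbourhood, pattern):
            write(cell, v)
        for cell, v in zip(neighbourhood, pattern):
            if read(cell) != v:
                return False
    return True

# A 5-cell type-1 neighbourhood (base cell plus N, S, E, W): 2^5 = 32 patterns.
mem = {}
cells = [(1, 1), (0, 1), (2, 1), (1, 0), (1, 2)]
assert snpsf_test(mem.get, mem.__setitem__, cells)

# A PSF where the base cell is forced to 0 whenever all four neighbours
# hold 1 is caught by one of the 32 patterns.
def faulty_read(cell):
    if cell == (1, 1) and all(mem[c] == 1 for c in cells[1:]):
        return 0
    return mem[cell]
assert not snpsf_test(faulty_read, mem.__setitem__, cells)
```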

Online RAM testing

• Due to shrinking geometries, memories are more susceptible to alpha-particle interference; the storage capacitance per cell is extremely small, and errors due to external interference become more frequent.

• Detection and correction of soft faults is important.

• Hamming codes are extensively used for online RAM testing.

• For every m data bits, k check bits must be added such that 2^k ≥ m + k + 1.
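The check-bit relation 2^k ≥ m + k + 1 and single-error correction can be sketched as follows. This models only bit positions in a classic Hamming codeword, not a specific RAM layout.

```python
def check_bit_count(m):
    """Smallest k with 2^k >= m + k + 1 (single-error-correcting Hamming)."""
    k = 1
    while (1 << k) < m + k + 1:
        k += 1
    return k

def hamming_encode(data):
    """Place data bits at non-power-of-two positions 1..n; each check bit
    at position 2^i gives even parity over positions with bit i set."""
    m = len(data)
    k = check_bit_count(m)
    n = m + k
    word = [0] * (n + 1)                # 1-indexed codeword
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):             # not a power of two -> data bit
            word[pos] = next(it)
    for i in range(k):
        p = 1 << i
        word[p] = sum(word[j] for j in range(1, n + 1) if j & p and j != p) % 2
    return word[1:]

def hamming_syndrome(code):
    """XOR of the 1-indexed positions holding a 1; zero for a valid word,
    otherwise the position of a single-bit error."""
    s = 0
    for j, bit in enumerate(code, start=1):
        if bit:
            s ^= j
    return s

data = [1, 0, 1, 1, 0, 0, 1, 0]         # m = 8 -> k = 4, so 2^4 >= 8 + 4 + 1
code = hamming_encode(data)
assert hamming_syndrome(code) == 0      # no error

code[6] ^= 1                            # flip one stored bit
assert hamming_syndrome(code) == 7      # syndrome locates position 7
```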



ROM Testing

• ROM testing differs from RAM testing in that the correct data that the ROM should contain is already known.

• The SAF model used for ROMs is sometimes a restricted SAF model in which only unidirectional SA faults can occur, meaning that any given chip will have either only SA0 faults or only SA1 faults. This is based on a ROM fault model where only "opens" occur; these are missing connections resulting in either all SA0 faults or all SA1 faults.

ROM Testing

• Test mode resources:

– Scan-in of test data to a shadow register and scan-out of test response data.

– Replacement of the normal PROM array outputs by data bits loaded into the shadow register.

– Loading of the normal PROM array outputs back into the shadow register for subsequent scan-out and verification.


ROM Testing


• The preferred ROM testing method is to cycle the ROM through all of its addresses and compress the resulting bit stream at the ROM outputs using a linear feedback shift register (LFSR) in the automatic test equipment (ATE).

This scheme is based on cyclic redundancy codes (CRCs).
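The LFSR signature compression can be sketched as follows. The 4-bit CRC polynomial and the ROM contents are illustrative choices, not a specific ATE configuration; the ATE compares the final remainder against the known-good signature.

```python
def crc_signature(bitstream, poly=0b10011, width=4):
    """Compress a serial bit stream with a width-bit LFSR implementing
    polynomial division; poly 0b10011 is x^4 + x + 1."""
    reg = 0
    for bit in bitstream:
        reg = (reg << 1) | bit
        if reg >> width:               # overflow bit set: subtract poly
            reg ^= poly
    return reg

# Cycle a small "ROM" through all addresses and serialize its output bits.
rom = [0xA, 0x3, 0x7, 0xC]
bits = [(w >> i) & 1 for w in rom for i in reversed(range(4))]
good = crc_signature(bits)             # golden signature for this ROM

# A single defective word changes the signature, flagging the fault
# (any CRC detects every single-bit error).
faulty_rom = rom[:]
faulty_rom[2] = 0x5
bad_bits = [(w >> i) & 1 for w in faulty_rom for i in reversed(range(4))]
assert crc_signature(bad_bits) != good
```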
