0740-7475/03/$17.00 © 2003 IEEE Copublished by the IEEE CS and the IEEE CASS September–October 2003 17
IC MANUFACTURING TEST is changing, with an
increased emphasis on at-speed testing to maintain test
quality for larger, more complex chips and new fabrica-
tion processes. Growing gate counts and the increasing
incidence of timing defects in small fabrication
technologies force improvements in test quality to
maintain the quality of chips delivered to customers. Improving
the stuck-at test coverage alone still might leave too many
timing-based defects undetected to reach quality goals.
Therefore, at-speed testing is often necessary.
Scan-based ATPG solutions for at-speed testing ensure
high test coverage and reasonable development effort.
This article explores applying at-speed scan testing. We
introduce new strategies that optimize ATPG to apply
specific clock sequences consistent with the circuit's
operation. We also use these strategies to have ATPG generate
tests that use internal clocking logic. Furthermore, we
combine the same technique with programmable phase-
locked loop (PLL) features to support applying high-fre-
quency, at-speed tests from internal PLLs. As a result, we
can base the application of precise clocks for at-speed
tests on on-chip clock generator circuitry instead of testers.
Motivation for at-speed scan testing

IC fabrication processes produce a given number of
defective ICs. Many companies have gotten by with sta-
tic stuck-at scan testing and a limited
amount of functional test patterns for at-
speed testing to uncover these defective
ICs. They often supplement these tests
with IDDQ tests, which detect many types
of defects, including some timing-related
ones.1 In the past, these tests effectively
screened out enough of the limited num-
ber of timing-related defects. However,
at the smaller geometry sizes of today’s
ICs, the number of timing-related defects is growing,2 a
problem exacerbated by the reduced effectiveness of
functional and IDDQ testing. Functional testing is less
effective, because the difficulty and time to generate
these tests grows exponentially with increasing gate
counts. The electrical properties of 0.13-micron and
smaller technologies have caused many companies to
rely less on IDDQ tests, because a defective device’s cur-
rent will be difficult to distinguish from the normal qui-
escent current. Some have abandoned IDDQ tests
altogether for these small-geometry devices.
The main force behind the need for at-speed testing is
the defect characteristics of 0.13-micron and smaller tech-
nologies that are causing more timing-related defects.3 For
example, one study on a microprocessor design showed
that if scan-based at-speed tests were removed from the
test program, the escape rate went up nearly 3%.4 This was
on a chip with a 0.18-micron feature size.
We can use functional tests to provide at-speed tests,
but the functional test development problem is explod-
ing exponentially with chip growth. For a different micro-
processor design, the development effort to create the
functional test set took three person-years to complete.5
Furthermore, these tests consumed a large percentage
of the tester memory, and the test time to run them was
significant. Because the effort to create effective at-speed
functional-test patterns is daunting, more companies are
moving to at-speed scan-based testing.

High-Frequency, At-Speed Scan Testing
Xijiang Lin, Ron Press, Janusz Rajski, Paul Reuter, Thomas Rinderknecht, Bruce Swanson, and Nagesh Tamarapalli
Mentor Graphics

Editor's note: At-speed scan testing has demonstrated many successes in industry. One key feature is its ability to use on-chip clocks for accurate timing in the application of test vectors on a tester. The authors describe new strategies where at-speed scan tests can be applied with internal PLLs. They present techniques for optimizing ATPG across multiple clock domains and propose methodologies to combine both stuck-at-fault and delay-test vectors into an effective test suite.
—Li-C. Wang, University of California, Santa Barbara
Logic BIST can perform at-speed testing but is usually
combined with ATPG to achieve high enough coverage. The
value of logic BIST is that it provides test capabilities,
such as testing secure products or in-system testing, when tester
access is impossible. Logic BIST uses an on-chip pseudorandom
pattern generator (PRPG) to generate pseudorandom
data that loads into the scan chains. A second
multiple-input shift register (MISR) computes a signature
based on the data that is shifted out of the scan chains.
There is reasonable speculation that supplementing
a high-quality test suite in production with logic BIST
might detect some additional unmodeled defects. At-
speed logic BIST is possible using internal PLLs to pro-
duce many pseudorandom at-speed tests. However,
employing this test strategy requires additional deter-
ministic at-speed tests to ensure higher test quality. This
test strategy also requires adhering to strict design rule
checking and using on-chip hardware to increase testa-
bility and avoid capturing unknown values. Some spec-
ulate that logic BIST will improve defect coverage
because it will detect faults many times. However,
although it does provide multiple detections, they occur
only at the fault sites that are random-pattern testable.
Also, although logic BIST may be useful for transition
fault testing, the low probability of sensitizing critical
paths with pseudorandom vectors makes logic BIST
unsuitable for path delay testing.
Scan-based tests and ATPG provide a good general
solution for at-speed testing. This approach is gaining
industry acceptance and is a standard production test
requirement at many companies. However, scan-based,
at-speed ATPG grows the pattern set size significantly.
This is because it is more complicated to activate and
propagate at-speed faults than stuck-at faults. Because
of this complexity, compressing multiple at-speed faults
per pattern is less efficient than for stuck-at faults.
Fortunately, embedded compression techniques can
support at-speed scan-based testing without sacrificing
quality. When using any kind of embedded compres-
sion solution, however, engineers must take care not to
interfere with the functional design, because core logic
changes can significantly affect overall cost.
Moving high-frequency clocking from the tester to the chip
In the past, most devices were driven directly from an
externally generated clock signal. However, the clock
frequencies that high-performance ICs require cannot
be easily applied from an external interface. Many
designs use an on-chip PLL to generate high-speed inter-
nal clocks from a far slower external reference signal.
The problem of importing high-speed clock signals into
the device is also an issue during at-speed device test-
ing. It is difficult and costly to mimic high-frequency
(PLL) clocks from a tester interface. Studies have shown
that both high-speed functional and at-speed scan tests
are necessary to achieve the highest test coverage pos-
sible.4 To control costs, more testing will move from cost-
ly functional test to at-speed scan test. Some companies
have already led the way to new at-speed scan testing by
using on-chip PLLs.6 Although this is a new idea, it is gain-
ing acceptance and use in industry designs. These tech-
niques are useful for any type of scan design, such as
mux-DFF or level-sensitive scan design (LSSD).
Because a delay test’s purpose is to verify that the cir-
cuitry can operate at a specified clock speed, it makes
sense to use the actual on-chip clocks, if possible. You
not only get more accurate clocks (and tests), but you
also do not need any high-speed clocks from the tester.
This lets you use less-sophisticated, and hence cheap-
er, testers. In this scenario, the tester provides the slow-
er test shift clocks and control signals, and the
programmable on-chip clock circuitry provides the at-
speed launch and capture clocks.
To handle these fast on-chip clocks, we have
enhanced ATPG tools to deal with any combination of
clock sequences that on-chip logic might generate.6 The
ATPG user must simply define the internal clocking
events and sequences as well as the corresponding
external signals or clocks that initiate these internal sig-
nals. That way the clock control logic and PLL, or other
clock-generating circuitry, can be treated like a black
box for ATPG purposes, and the pattern generation
process is simpler.
At-speed test methodology

The two prominent fault models for at-speed scan
testing are the path-delay and transition fault models.
Path delay patterns check the combined delay through
a predefined list of gates. It is unrealistic to expect to test
every circuit path, because the number of paths increas-
es exponentially with circuit size. Therefore, it is com-
mon practice to select a limited number of paths using
a static timing-analysis tool that determines the most crit-
ical paths in the circuit. Most paths begin and terminate
with sequential elements (scan cells), with a few paths
having primary inputs (PIs) for start points or primary
outputs (POs) for endpoints.
Speed Test and Speed Binning for DSM Designs
18 IEEE Design & Test of Computers
The transition fault model represents a gross delay
at every gate terminal. We test transition faults in much
the same way as path delay faults, but the pattern gen-
eration tools select the paths. Transition fault tests tar-
get each gate terminal for a slow-to-rise or slow-to-fall
delay fault. Engineers use transition test patterns to find
manufacturing defects because such patterns check for
delays at every gate terminal. Engineers use path delay
patterns more for speed binning.
At-speed scan testing for both path-delay and transi-
tion faults requires patterns that launch a transition from
a scan cell or PI and then capture the transition at a scan
cell or PO. The key to performing at-speed testing is to
generate a pair of clock pulses for the launch and cap-
ture events. This can be complicated because modern
designs can contain several clocks operating at different
frequencies.
One method of applying the launch and capture events
is to use the last shift before capture (functional mode) as
the launch event—that is, the launch-off-shift approach.
Figure 1 shows an example waveform for a launch-off-shift
pattern for a mux-DFF type design; you can apply a simi-
lar approach to an LSSD. The scan-enable (SE) signal is
high during test mode (shift) and low when in functional
mode. The figure also shows the launch clock skewed so
that it’s late in its cycle, and the capture clock is skewed so
that it’s early in its cycle. This skewing creates a higher
launch-to-capture clock frequency than the standard shift
clock frequency. (Saxena et al.7 list more launch and cap-
ture waveforms used by launch-off-shift approaches.) The
main advantage of this approach is simple test pattern gen-
eration. The main disadvantage (for mux-DFF designs) is
that we must treat the SE signal as timing critical. When
using a launch-off-shift approach, pipelining an SE within
the circuit can simplify that SE’s timing and design.
However, the nonfunctional logic related to operating SE
at a high frequency can contribute to yield loss.
An alternate approach called broadside patterns
uses a pair of at-speed clock pulses in functional mode.
Figure 2 shows an example waveform for a broadside
pattern. Each clock waveform is crafted to test only a
subset of all possible edge relationships between the
same and different clock domains. The first pulse initi-
ates (launches) the transition at the targeted terminal,
and the second pulse captures the response at a scan
cell. This method also allows using the late and early
skewing of the launch and capture clocks within their
cycles. The main advantage of this broadside approach
is that the timing of the SE transition is no longer criti-
cal, because the launch and capture clock pulses occur
in functional mode. Adding extra dead cycles after the
last shift can give the SE additional time to settle.
Both logic BIST and ATPG can generate launch-off-shift
and broadside patterns. Logic BIST includes
clock-control hardware to provide at-speed clocks from a PLL.
The clock sequence in a BIST approach is usually
constructed so that clocks controlling larger
amounts of logic are pulsed more often during the
pseudorandom patterns. When using deterministic test
pattern generation, an ATPG tool can perform the analy-
sis to select the desired clock sequence on a per-pattern
basis to detect the specific target faults. ATPG can use
programmable PLLs for at-speed clock generation if the
PLL outputs are programmable. Both logic BIST and
ATPG generally shift at lower frequencies than the
fastest at-speed capture frequencies to avoid power
problems during shift. In addition, a fast shift frequen-
cy would force high-speed design requirements for the
scan chain. It is the timing from launch to capture that is
important for accurate at-speed testing.
Controlling complex clock-generator circuits
To properly use high-frequency clocks that are gen-
erated on chip, engineers must address several issues.
Figure 1. Launch-off-shift pattern timing.

Figure 2. Broadside-pattern timing.
Sequences of multiple on-chip (internal) clock pulses
are necessary to create the launch and capture events
needed for at-speed scan patterns. Engineers can create
them using various combinations of off-chip (external)
clocks and control signals. To generate an appropriate
internal clock sequence, it is inefficient to have an ATPG
engine work back through complex clock generators to
determine the necessary external clock pulses and con-
trol signals for every pattern. Furthermore, you cannot
let the ATPG engine choose the internal clock sequences
without regard for the clock-generation logic, because
the ATPG engine might use internal clock sequences
that cannot be created on chip.
To solve these issues, we have implemented an inno-
vative ATPG approach that lets you specify legal clock
sequences to the tool using one or more named-capture
procedures. These named-capture procedures describe
a sequence of events grouped in test cycles. Included
in each procedure is the way the internal clock
sequence can be issued along with the corresponding
sequence of external clocks or events (condition state-
ments) required to generate it. Using these procedures,
you can specify all legal clock sequences needed to test
the at-speed faults to the tool. The ATPG engine can per-
form pattern generation while only considering the
internal clocks, their legal sequences, and the internal
conditions that must be set up to produce the clock
sequence. The final scan patterns are saved using the
external clock sequences by automatically mapping
each internal clock sequence to its corresponding exter-
nal clock/control sequence. Using internal and exter-
nal clock sequences (plus control signals) is efficient
for behaviorally modeling the clock-generation logic so
that ATPG can create at-speed scan pat-
terns by using the on-chip clocks. This
method supports all types of clock gen-
eration logic, even the logic treated as a
black box in the ATPG tool.
Figure 3 shows a simple example of a
programmable PLL used to generate mul-
tiple clock pulses from a single off-chip
clock. The programmability is usually a
register-controlled clock-gating circuit
that gates the PLL outputs.
Figure 4 presents the waveform show-
ing the relationship between the external
and internal clocks. The example shows
a mux-DFF design with two internal
clocks and two scan clocks. A similar
waveform would exist for an LSSD. We
can express this waveform to the ATPG tool as a single
named-capture procedure and its associated timing def-
initions for each cycle (timeplates). We use it to
describe a single clock sequence, as well as the timing
it uses, to the ATPG tool. If the PLL supports other clock
sequences that are necessary to detect at-speed faults,
we can write other named-capture procedures for them.
In the named-capture procedure, we can mark
some test cycles as slow. These cycles are not available
for at-speed launch or capture cycles. In other words,
we can mark the cycles that are not valid for at-speed
fault simulation detection. In some designs, the PLL
control signals are not supplied externally. Instead,
engineers design them using internal scan cells. To
avoid incorrect logic values, we can also use condition
statements to force ATPG to load desired values in
those scan cells.
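The information a named-capture procedure carries can be sketched as plain data: the internal clock cycles (some marked slow and thus excluded from at-speed launch or capture), the external cycles that produce them, and any condition values that must be loaded into scan cells. The following Python sketch is our own illustration of that structure; the class and field names are not the ATPG tool's actual syntax.

```python
from dataclasses import dataclass, field

@dataclass
class Cycle:
    pulses: list          # clocks pulsed in this test cycle
    slow: bool = False    # slow cycles are not valid for at-speed launch/capture

@dataclass
class NamedCapture:
    name: str
    internal: list        # internal (on-chip) clock cycles seen by ATPG
    external: list        # external tester cycles that generate them
    conditions: dict = field(default_factory=dict)  # required scan-cell values

    def at_speed_cycles(self):
        """Indices of cycles eligible for at-speed fault detection."""
        return [i for i, c in enumerate(self.internal) if not c.slow]

# Hypothetical procedure for one clock domain: a slow setup cycle followed
# by fast launch and capture pulses, produced by a longer external sequence.
clk1_domain = NamedCapture(
    name="capture_clk1",
    internal=[Cycle(["Clk1"], slow=True), Cycle(["Clk1"]), Cycle(["Clk1"])],
    external=[Cycle(["System_clk"]) for _ in range(6)],
    conditions={"pll_ctrl_cell": 1},
)
```

When patterns are saved, each internal sequence is mapped back to its external counterpart, which the `external` field models behaviorally here.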
Often, clock generation circuits require many exter-
nal cycles to produce several internal pulses.

Figure 3. Phase-locked loop (PLL) clock generation with internal and external clocks.

Figure 4. Waveform of clock-generation logic (System_clk, Scan_en, Begin_ac, Scan_clk1/2, Clk1/2; 240 ns of slow and fast cycles).

Some circuits require more than 20 external cycles to produce
two or three internal clock pulses. The internal and
external modes let the ATPG engine efficiently perform
pattern generation without having to simulate the large
number of external cycles. The number of internal and
external cycles within a named-capture procedure can
vary as long as the total times for internal and external
are equal. This can dramatically improve pattern gen-
eration for these types of circuits.
Nonintrusive macrotesting techniques provide a
method of applying at-speed test sequences to embed-
ded memories.8 These techniques use the circuit’s scan
logic to provide the desired pattern sequences, such as
march sequences, at the embedded memory. We can
also use named-capture procedures to control at-speed
clock events during macrotesting.
Merging at-speed patterns with stuck-at patterns
With increasing speed-related defects, it is necessary
to have at-speed test patterns such as transition patterns
in addition to the usual stuck-at patterns. The number
of transition patterns typically ranges from about three
to five times the number of stuck-at test patterns.
Transition patterns, however, also detect a significant
percentage of stuck-at faults. Thus, to minimize the over-
all test pattern count, we can merge the pattern sets for
multiple fault models.
Figure 5 illustrates the stuck-at test coverage profile
for a half-million gate design with 45,000 scan cells. The
figure shows that 2,000 patterns are required to achieve
98.84% stuck-at coverage. For this design, approximately
10,800 patterns, or about five times the number of stuck-
at test patterns, are required to achieve broadside tran-
sition fault coverage of 87.86%. Assume that the tester
memory capacity can store only 6,000 patterns. Then,
as Figure 5 shows, one solution is to apply the original
2,000 stuck-at test patterns followed by a truncated tran-
sition pattern set composed of 4,000 patterns, yielding
transition test coverage of 83.44%.
The end test quality should be better with the test pat-
tern set composed of stuck-at and transition patterns com-
pared to only stuck-at patterns. However, we can obtain
a more efficient compact pattern set because the transi-
tion patterns detect a significant percentage of stuck-at
faults as well. In fact, for this example, stuck-at fault sim-
ulation of the 4,000 transition patterns results in 93.07%
of stuck-at faults detected by the transition patterns.
As Figure 6 illustrates, only 1,180 extra stuck-at pat-
terns are required to obtain final stuck-at coverage of
98.84%. Thus, by first generating transition patterns and
fault-simulating them for stuck-at faults, we can obtain
a transition test coverage of 83.44% and a stuck-at test
coverage of 98.84% with 5,180 total patterns instead of
6,000 patterns.
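The savings above can be mimicked with a toy fault-simulation sketch (synthetic pattern and fault sets, not the article's design): fault-grading the transition patterns against stuck-at faults first, then keeping only the stuck-at patterns that add new detections, shrinks the merged set.

```python
def top_off(base_patterns, topoff_patterns, target_faults):
    """Keep only top-off patterns that detect target faults the base set missed."""
    detected = set()
    for p in base_patterns:            # fault-simulate the base set first
        detected |= p & target_faults
    kept = []
    for p in topoff_patterns:          # retain a pattern only if it adds detections
        new = (p & target_faults) - detected
        if new:
            detected |= new
            kept.append(p)
    return kept, detected

# Each pattern is modeled as the set of fault IDs it detects.
stuck_faults = set(range(10))
transition_pats = [{0, 1, 2}, {3, 4}, {5}]           # also detect stuck-at faults
stuck_pats = [{0, 1}, {5, 6}, {7, 8, 9}, {2, 3}]
kept, detected = top_off(transition_pats, stuck_pats, stuck_faults)
# Only the patterns covering faults 6-9 survive; {0, 1} and {2, 3} are redundant.
```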
Figure 7 illustrates a general pattern generation flow
for multiple-fault models, to achieve a compact pattern
set. As Figure 7 shows, if path delay testing is desired,
then the pattern generation effort can commence with
path delay ATPG. We can simulate the resulting path
delay pattern set against transition faults to eliminate
the transition faults that the path delay pattern set
detects from our target fault list. If the resulting transi-
tion test coverage has not reached the target coverage,
we can perform ATPG for the remaining undetected
transition faults. Simulating the path delay patterns and
the transition patterns detects many of the stuck-at
faults. Furthermore, we perform ATPG for the undetected
stuck-at faults if the target stuck-at coverage is not
reached. This methodology generates a compact pattern
set across multiple fault models.

Figure 5. Stuck-at patterns followed by transition patterns (coverage vs. number of patterns).

Figure 6. Transition patterns with supplemental stuck-at patterns (coverage vs. number of patterns).
Even with all the software compression techniques,
we might not be able to compress the pattern set
enough to fit them in the ATE memory. In those situa-
tions, a novel hardware compression technique called
embedded deterministic test (EDT) provides a dramatic
reduction in test data volume. With this technique, we
can comfortably store the pattern set for several fault
models. Thus, we can achieve high test quality while
simultaneously containing the test costs.9
Case study

We use an industrial design to demonstrate how to
apply the named-capture procedures. Specifically, we
generate at-speed test patterns and describe a method-
ology to fit stuck-at and at-speed patterns into the tester
memory without requiring multiple loads of the test
data. The design has
■ 16 scan chains;
■ 70,178 scan cells;
■ 358 nonscan cells;
■ five internal clocks;
■ 1,836,403 targeted stuck-at faults; and
■ 2,196,668 targeted transition faults.
This chip was designed with an embedded pro-
grammable PLL that generates clocks for at-speed test-
ing. The tester can hold no more than 15,000 test pat-
terns. The test strategy requirements are as follows:
■ The final test set will include two subtest sets, one for testing the stuck-at faults and the other for testing the at-speed faults.
■ The highest priority is to get the best possible test coverage for the stuck-at faults. This means the test set for stuck-at faults cannot be truncated if the test data volume in the final test set is larger than the tester memory.
■ The transition fault model detects timing-related defects.
■ The test coverage for the transition faults must be as high as possible, provided the final test set fits into the tester memory.
■ The broadside launch-and-capture method must be used to generate at-speed patterns.
■ All values at PIs must remain unchanged, and all POs are unobservable while applying the test patterns for the transition faults. This is necessary for this example because the tester is not fast enough to provide PI values and strobe POs at speed.
During test generation for the stuck-at faults, we use
both clock domain analysis and multiple clock-com-
pression techniques to generate the most compact test
set. The ATPG tool generates 7,557 test patterns that
achieve 96.56% stuck-at test coverage.
When generating test patterns for the transition
faults, we target only the faults in the same clock
domain. The ATPG tool detects these faults using the
same clock for launch and capture. However, the tran-
sition fault test coverage can improve further if the tool
considers faults that cross the clock domains. Testing
these faults requires sequences that use different launch
and capture clocks.
Because the ATPG tool cannot detect faults in differ-
ent clock domains simultaneously by using a clock
sequence with the same launch-and-capture clock, the
tool analyzes the fault list and classifies it according to
the clock domains, thus splitting up the fault list, before
test generation. This lets several test generation process-
es run in parallel—one for the faults in each clock
domain—without increasing the test pattern count. For
this experiment’s design, we classified the transition
faults into six groups: The first five groups contain the
faults for each clock domain; the last group (unclassi-
fied) contains all faults that do not fall into a single clock
domain. Table 1 gives the fault classification results.

Figure 7. Efficient pattern generation for multiple-fault models (generate path delay patterns → grade for transition coverage → generate additional transition patterns → grade for stuck-at coverage → generate additional stuck-at patterns → pattern optimization).
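The classification step that splits the fault list by clock domain can be sketched as a simple grouping pass (illustrative fault records, not the ATPG tool's internal data model): faults touching a single domain go to that domain's group, and faults crossing domains fall into the unclassified group.

```python
from collections import defaultdict

def classify(faults):
    """Group faults by clock domain; cross-domain faults go to 'unclassified'."""
    groups = defaultdict(list)
    for fault, domains in faults:          # domains touched by the fault's paths
        key = domains[0] if len(set(domains)) == 1 else "unclassified"
        groups[key].append(fault)
    return groups

# Hypothetical fault records: (fault ID, clock domains involved).
faults = [("f1", ["Clk1"]), ("f2", ["Clk1"]), ("f3", ["Clk2"]),
          ("f4", ["Clk1", "Clk2"])]       # f4 crosses clock domains
groups = classify(faults)
```

Each per-domain group can then feed an independent ATPG run, which is what lets the runs proceed in parallel without inflating the pattern count.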
To test the faults in
each clock domain, we
defined five named-cap-
ture procedures. They
constrain the clock
sequences during test gen-
eration for each clock
domain. As an example,
the named-capture proce-
dure used to test the faults
in clock domain Clk1 con-
sists of two cycles. All
clocks except Clk1 are set
to their off state in those
two cycles, and Clk1 is
pulsed in the launch and
capture cycles. Moreover,
driving PIs and measuring
POs are disabled from
ATPG in the second cycle so that at-speed events do not
depend on high-speed tester interfaces.
The last two columns in Table 1 show the test gen-
eration results by applying the five named-capture pro-
cedures for each clock domain. Before generating the
test patterns for the faults in the unclassified group, we
fault-simulated those faults by applying the test patterns
generated for each clock domain first. The resulting test
coverage for the unclassified fault group was 66.5%.
Next, we generated 106 additional test patterns target-
ing the remaining faults in the unclassified group to
improve the test coverage by 0.22%. In summary, ATPG
generated 29,029 test patterns, and the transition test
coverage achieved was 79.67%.
Table 2 summarizes the test generation results for
stuck-at and transition faults. The number of transition
test patterns in Table 2 is the number after the static com-
paction of all generated transition test patterns (the stat-
ic compaction removed 4,865 redundant transition test
patterns). Because the test patterns generated for the tran-
sition faults also detect many stuck-at faults, we must
apply an independently created set of stuck-at test pat-
terns to target faults that the transition test patterns did
not detect. Thus, the transition fault test patterns are fault-
simulated for stuck-at test coverage, and 89.4% of the
stuck-at faults are detected with the transition patterns.
Next, we fault-simulated the original stuck-at test patterns
and found that 3,881 of them were required to
detect the remaining 7.16% of stuck-at faults covered by
the full set of stuck-at patterns. Thus, 28,045 test patterns
(24,164 + 3,881) were necessary to achieve the maximum
possible stuck-at test coverage and the best possible tran-
sition test coverage.
For this example, the tester can hold only 15,000 test
patterns, and stuck-at test coverage cannot be sacri-
ficed. So the transition test set must be truncated to fit
in the tester memory. To minimize the loss of transition
test coverage due to test pattern truncation, we apply
the following steps.
First, we apply a test pattern ordering technique10 to
order the stuck-at fault test set based on the stuck-at
fault model. We record the test coverage curve after
applying the ordered test set (Cstuck). Second, we apply
the test pattern ordering technique to order the transi-
tion test patterns based on the transition fault model.
Third, we fault-grade the ordered transition test set by
using the stuck-at fault model, and record the test cov-
erage curve (Ctran).
Fourth, we must determine the number of stuck-at
patterns (Nstuck) and transition patterns (Ntran) to reach
the best possible transition test coverage while reach-
ing the maximum stuck-at coverage (96.56%). The com-
bination of stuck-at patterns and transition patterns
must be less than 15,000. We determine the pattern
count mix by using the curves obtained from the first
and third steps, as Figure 8 shows. We begin by look-
ing at a point on the Ctran curve that relates to a specific
number of transition patterns (Ntran).

Table 1. Transition fault distribution by clock domain.

Clock domain   No. of faults in domain   Test coverage (%)   No. of test patterns
Clk1           1,255,898                 85.21               2,165
Clk2           381,764                   72.45               24,799
Clk3           82,610                    81.21               1,024
Clk4           50,628                    77.73               638
Clk5           48,810                    83.03               297
Unclassified   376,958                   66.50               NA
                                         66.72               106
Total          2,196,668                 79.67               29,029

Table 2. Test generation results before test pattern truncation.

Fault type   Test coverage (%)   No. of test patterns   Stuck-at fault simulation coverage (%)
Stuck-at     96.56               7,557                  NA
Transition   79.67               24,164                 89.4

This point will also
define the stuck-at test coverage, TC, if the Ntran patterns
are run. Next, we go to the Cstuck curve at the same test
coverage. This represents the stuck-at coverage starting
point once the Ntran transition patterns are applied. The
number of stuck-at patterns from this point to the last
stuck-at pattern is Nstuck. It represents an approximation
of the number of stuck-at patterns needed to supple-
ment the transition patterns and achieve the maximum
stuck-at test coverage. If (Ntran + Nstuck) is greater than
15,000, then we select a smaller Ntran. If the sum is con-
siderably smaller than 15,000, then we select a higher
Ntran. We chose an Ntran of 8,307 for this experiment. The
transition test coverage and stuck-at test coverage
achieved by the 8,307 transition test patterns were
77.15% and 88.95%, respectively.
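The curve-based search in the steps above can be sketched as follows (synthetic coverage curves; real curves come from the pattern-ordering and fault-grading steps). As the text notes, the stuck-at count read off the Cstuck curve is an approximation, so the resulting mix may still need adjustment.

```python
import bisect

def pick_mix(c_tran, c_stuck, budget):
    """Pick the largest Ntran whose stuck-at top-off still fits the budget.

    c_tran: cumulative stuck-at coverage graded over ordered transition patterns.
    c_stuck: cumulative stuck-at coverage over ordered stuck-at patterns.
    Both are nondecreasing lists of percentages.
    """
    for n_tran in range(len(c_tran), 0, -1):
        tc = c_tran[n_tran - 1]                  # stuck-at coverage from Ntran patterns
        start = bisect.bisect_left(c_stuck, tc)  # same coverage point on Cstuck
        n_stuck = len(c_stuck) - start           # patterns left to reach max coverage
        if n_tran + n_stuck <= budget:
            return (n_tran, n_stuck)
    return None                                  # no mix fits the tester memory

# Hypothetical curves (percent coverage after each pattern, in applied order).
c_tran = [10, 25, 40, 55, 65, 72, 80, 85]
c_stuck = [30, 50, 65, 75, 82, 88, 92, 95, 97, 98]
```

Starting from the largest Ntran mirrors the text's advice: shrink Ntran when the sum exceeds the budget, and grow it when the sum is comfortably under.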
Fifth, once Ntran is determined, we target the stuck-at
faults that the Ntran transition test patterns do not detect
but that the original stuck-at test patterns do. We per-
form a new ATPG run to regenerate stuck-at test patterns
that detect them. The number of newly generated test
patterns is 3,372. (If performing an additional ATPG run
is not desirable, we could simulate faults and select the
test patterns from the original stuck-at test set that detect
the remaining faults instead. For the design under the
experiment, this would require approximately 4,234 test
patterns from the original stuck-at test set.)
Finally, because the total number of test patterns
(8,307 plus 3,372) is less than 15,000, we add 3,321 extra
test patterns from the ordered transition fault test set.
This improves the transition test coverage from 77.15%
to 78.28%.
The final test set of the design includes 15,000 test pat-
terns. The stuck-at test coverage achieved was 96.56%,
and the transition test coverage was 78.28%. Due to the
test pattern truncation required to fit on the tester, 1.39%
of the possible transition test coverage was lost.
Because the at-speed test strategy in this case holds
PI values constant, treats all POs as nonobservable, and
ignores the faults in cross-clock domains during test
generation for transition faults, the highest transition test
coverage achieved was only 79.67% before test pattern
truncation. However, the ATPG tool determined that
99.91% of all transition faults were classified. This means
that most of the undetected faults were ATPG
untestable. If we could remove these constraints, we
could substantially increase the transition test coverage.
However, it is impractical to change PI values and mea-
sure POs when using a low-cost tester to test the high-
frequency chips at speed.
SCAN-BASED AT-SPEED TESTING is becoming an efficient,
effective technique for detecting timing-related defects
at far lower cost than functional test. It’s
important to reiterate the cost benefit of using the on-chip
programmable PLL circuitry for test purposes. These high-
frequency clocks are available on chip instead of having
to come from a sophisticated piece of test equipment.
Future work related to at-speed ATPG includes
more-precise ATPG diagnostics of timing-related
defects to facilitate timing-defect failure analysis and
strategies to improve the quality and effectiveness of
at-speed ATPG. Researchers must determine how much
yield loss occurs from at-speed tests of nonfunctional
paths during launch-off-shift patterns, and what the
value is of detecting timing defects in nonfunctional
paths. Our results demonstrate that deterministic ATPG
targeting each stuck-at fault site multiple times will
reduce defects per million (DPM) more than single-
fault detection. A follow-on to this work is to apply the
multiple-detection ATPG technique to transition fault
testing. Researchers are also investigating ways to merge
physical silicon information into ATPG, both to identify
potential defect locations and to aid in diagnosing the
physical properties that cause defects.
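The multiple-detection idea mentioned above can be illustrated with a toy n-detect coverage metric: a fault only counts as covered if at least n distinct patterns detect it, which raises the chance that the physical defect at that site is excited under some sensitization. The function and data model below are hypothetical.

```python
from collections import Counter

def n_detect_coverage(patterns, all_faults, n):
    """Fraction of faults detected by at least n patterns.
    Toy model: each pattern is the set of fault IDs it detects."""
    hits = Counter(f for detected in patterns for f in detected)
    return sum(1 for f in all_faults if hits[f] >= n) / len(all_faults)

# f1 is detected 3 times, f2 twice, f3 once.
patterns = [{"f1", "f2"}, {"f1", "f3"}, {"f1", "f2"}]
cov_1 = n_detect_coverage(patterns, ["f1", "f2", "f3"], n=1)
cov_2 = n_detect_coverage(patterns, ["f1", "f2", "f3"], n=2)
```

Single-detect coverage is perfect here, but the 2-detect metric exposes f3 as fragile: one pattern set can look complete by the classic metric while leaving most fault sites exercised only once.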
Acknowledgments
We are grateful for discussions and contributions
from Cam L. Lu and Robert B. Benware of LSI Logic
regarding efficient merging of transition and stuck-at
pattern sets.
Speed Test and Speed Binning for DSM Designs
24 IEEE Design & Test of Computers
[Figure 8 appears here: a plot of stuck-at test coverage (%) versus number of patterns, showing the stuck-at coverage of the stuck-at patterns (Cstuck) and the stuck-at coverage grade of the transition patterns (Ctran), with Nstuck, Ntran, and the target coverage TC marked.]
Figure 8. Stuck-at coverage for transition and stuck-at patterns.
References
1. P. Nigh et al., “Failure Analysis of Timing and IDDQ-Only
Failures from the SEMATECH Test Methods Experiments,”
Proc. Int’l Test Conf. (ITC 98), IEEE Press, 1998, pp. 43-52.
2. G. Aldrich and B. Cory, “Improving Test Quality and
Reducing Escapes,” Proc. Fabless Forum, Fabless
Semiconductor Assoc., 2003, pp. 34-35.
3. R. Wilson, “Delay-Fault Testing Mandatory, Author
Claims,” EE Design, 4 Dec 2002.
4. J. Gatej et al., “Evaluating ATE Features in Terms of
Test Escape Rates and Other Cost of Test Culprits,”
Proc. Int’l Test Conf. (ITC 02), IEEE Press, 2002, pp.
1040-1048.
5. D. Belete et al., “Use of DFT Techniques in Speed Grading
a 1GHz+ Microprocessor,” Proc. Int’l Test Conf. (ITC 02),
IEEE Press, 2002, pp. 1111-1119.
6. N. Tendolkar et al., “Novel Techniques for Achieving High
At-Speed Transition Fault Test Coverage for Motorola’s
Microprocessors Based on PowerPC Instruction Set
Architecture,” Proc. 20th IEEE VLSI Test Symp. (VTS 02),
IEEE CS Press, 2002, pp. 3-8.
7. J. Saxena et al., “Scan-Based Transition Fault Testing:
Implementation and Low Cost Test Challenges,” Proc. Int’l
Test Conf. (ITC 02), IEEE Press, 2002, pp. 1120-1129.
8. J. Boyer and R. Press, “New Methods Test Small Memory
Arrays,” Proc. Test & Measurement World, Reed Business
Information, 2003, pp. 21-26.
9. J. Rajski et al., “Embedded Deterministic Test for Low
Cost Manufacturing Test,” Proc. Int’l Test Conf. (ITC 02),
IEEE Press, 2002, pp. 301-310.
10. X. Lin et al., “On Static Test Compaction and Test Pattern
Ordering for Scan Design,” Proc. Int’l Test Conf. (ITC 01),
IEEE Press, 2001, pp. 1088-1097.
Xijiang Lin is a staff engineer for the Design-for-Test products group at Mentor Graphics. His research interests include test generation, fault simulation, test compression, fault diagnosis, and DFT. He has a PhD in electrical and computer engineering from the University of Iowa.
Ron Press is the technical marketing manager for the Design-for-Test products group at Mentor Graphics. His research interests include at-speed test, intelligent ATPG, and macrotesting. He has a BS in electrical engineering from the University of Massachusetts, Amherst.
Janusz Rajski is a chief scientist and the director of engineering for the Design-for-Test products group at Mentor Graphics. His research interests include DFT and logic synthesis. He has a PhD in electrical engineering from Poznan University of Technology, Poznan, Poland.
Paul Reuter is a staff engineer for the Design-for-Test products group at Mentor Graphics. His research interests include ATPG, BIST, SoC test, low-cost test solutions, and test data standards. He has a BS in electrical engineering from the University of Cincinnati.
Thomas Rinderknecht is a software development engineer for the Design-for-Test products group at Mentor Graphics. His research interests focus on efficient implementations of logic BIST. He has a BS in electrical engineering from Oregon State University.
Bruce Swanson is a technical marketing engineer for the Design-for-Test products group at Mentor Graphics. His research interests include at-speed test and compression techniques. He has an MS in applied information management from the University of Oregon.
Nagesh Tamarapalli is a technical marketing engineer for the Design-for-Test products group at Mentor Graphics. His research interests include all aspects of DFT, BIST, and ATPG, including defect-based testing and diagnosis. He has a PhD in electrical engineering from McGill University, Montreal.
Direct questions and comments about this article to Ron Press, Mentor Graphics, 8005 SW Boeckman Rd., Wilsonville, OR 97070; [email protected].
For further information on this or any other computing
topic, visit our Digital Library at http://computer.org/
publications/dlib.