
A New Built-In Self-Repair Approach to VLSI Memory Yield Enhancement by Using Neural-Type Circuits

Pinaki Mazumder, Member, IEEE, and Yih-Shyr Jih, Member, IEEE

Abstract—As VLSI chip size is rapidly increasing, more and more circuit components are becoming inaccessible for external testing, diagnosis, and repair. Memory arrays are widely used in VLSI chips, and restructuring of partially faulty arrays by the available spare rows and columns is a computationally intractable problem. Conventional software memory-repair algorithms cannot be readily implemented within a VLSI chip to diagnose and repair these faulty memory arrays. Intelligent hardware based on a neural-network model provides an effective solution for such built-in self-repair (BISR) applications. This paper clearly demonstrates how to represent the objective function of the memory repair problem as a neural-network energy function, and how to exploit the neural network's convergence property for deriving optimal repair solutions. Two algorithms have been developed using a neural network, and their performances are compared with the repair most (RM) algorithm that is commonly used by memory chip manufacturers. For randomly generated defect patterns, the proposed algorithm with a hill-climbing (HC) capability has been found to be successful in repairing memory arrays in 95% of cases, as opposed to RM's 20% of cases. The paper also demonstrates how, by using very small silicon overhead, one can implement this algorithm in hardware within a VLSI chip for BISR of memory arrays. The proposed auto-repair approach is shown to improve the VLSI chip yield by a significant factor, and it can also improve the life span of the chip by automatically restructuring its memory arrays in the event of sporadic cell failures during field use.

I. INTRODUCTION

As THE VLSI device technological feature width is rapidly decreasing to the range of hundreds of nanometers, the chip yield tends to reduce progressively due to increased chip area, complex fabrication processes, and shrinking device geometries. In order to salvage partially faulty chips, redundant circuit elements are incorporated in the chips, and an appropriate reconfiguration scheme is employed to bypass and replace the faulty elements. In high-density dynamic random-access

Manuscript received May 1, 1990; revised January 10, 1992. This work was supported in part by the National Science Foundation, by the Office of Naval Research under Grant N00014-85-K-0122, and by the U.S. Army URI Program. This paper was recommended by Associate Editor F. Brglez.

P. Mazumder is with the Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109.

Y.-S. Jih was with the University of Michigan, Ann Arbor, MI. He is now with the IBM T.J. Watson Research Center, Yorktown Heights, NY.

IEEE Log Number 9009119.

memory (DRAM) chips, redundant rows and columns are added to reconfigure the memory subarrays, where the rows or columns in which defective cells appear are eliminated by using techniques such as electrically programmable latches or laser personalization. The problem of optimal reconfiguration and spare allocation has been widely studied by many researchers [1], [9], [10], [14], [15], [29]. A review of these algorithms for memory diagnosis and repair can be seen in [4]. But all of these algorithms cannot be readily applied to embedded arrays, where they are neither controllable by external testers, nor are their responses readily observable. Built-in self-test (BIST) circuits are commonly used to comprehensively test such embedded arrays, and bad chips are discarded when they fail to pass the relevant test procedures. In order to salvage the partially faulty chips, new built-in self-repair (BISR) circuits must be developed for optimal repair of the faulty memory arrays.

In this paper, a novel self-repair scheme is proposed that uses a built-in neural network to determine how to automatically repair a faulty memory array by utilizing the spare rows and columns. Two algorithms have been proposed to illustrate how a neural network can solve random defects, and their performances are compared with conventional software repair algorithms, such as Repair Most [28]. The Repair Most (RM) algorithm is a greedy algorithm that iteratively assigns a spare row (column) to replace the row (column) which currently has the maximum number of uncovered cells, until either all the faulty cells are covered or bypassed (providing a successful solution), or all the spare rows and columns are exhausted (failing to give a repair solution if all faulty cells are not bypassed), whichever occurs first. This algorithm is selected for comparison because it is relatively simple and can be implemented by digital hardware, unlike other software algorithms, such as simulated annealing or approximation algorithms. A hardware version of simulated annealing will require a Gaussian white noise generator to generate the random moves and a logarithmic circuit to schedule the temperature. The other heuristic-based algorithms (like FLCA, LECA, and BECA [15], and branch-and-bound [14]) will require microcode-driven complex digital circuits for solving the problem by hardware.


The performance of the neural algorithms has been compared with RM by running these algorithms on repairable fault patterns and examining what percentages of these fault patterns can be repaired by them. The simulation studies made in this paper demonstrate that a neural network using a hill-climbing (HC) capability provides a fast and good solution for a repairable array. The technique is implemented within the chip by an electronic neural network, and it promises to be a potential application area for neural networks.

Historically, the memory repair problem dates back to the evolution of 64 Kbit DRAM's in the late 1970's [25], when, to improve the chip yield, two or three extra rows and extra columns were added to each quadrant consisting of a 64 Kbit subarray of memory cells. Simple greedy [2], [28] and exhaustive search [1] algorithms were developed to repair the faulty memory subarrays. Since the search space was very small for such problem sizes, all these algorithms provided high throughput in memory diagnosis and repair. But as the memory size has increased to several megabits over the last few years, the defect patterns have become sufficiently complex and the search space has grown extensively. The problem of memory repair has been shown to be NP-complete [9], [14] by demonstrating that the repair problem is transformable in polynomial time to the Constrained Clique problem in a bipartite graph. A number of heuristic algorithms, such as branch-and-bound [14], approximation [14], best-first search [10], and others [15], [29], have recently been proposed to solve the memory array repair problem. The two key limitations of these algorithms are a) that their worst-case complexities are nearly exponential and b) that they are not readily implementable within the chip for BISR. These algorithms are generally written in a high-level programming language and are executable on general-purpose computers. This paper addresses two fundamental aspects of the memory repair problem: how to devise efficient algorithms so that the overall production throughput improves along with the chip yield, and how to implement such repair algorithms in hardware so that they can be applied to repairing memories embedded within VLSI chips. The contribution of this paper is to demonstrate how to solve the array repair problem by using neural networks, and how to implement a BISR scheme in hardware. A neural network can produce solutions by the collective computing of its neuron processors with far faster speed than the abovementioned sequential algorithms running on conventional computers. Since these neural processors are simple threshold devices, the basic computing step of neural nets is comparable to the on-off switching of a transistor [7], [8], [22], [24]. Another potential advantage of using neural networks is that they are robust and fault-tolerant in the sense that they may compute correctly even if some components fail and/or the exciting signals are partially corrupted by noise. Thus the reliability of the self-repair circuit using electronic neural networks is very high. This has been experimentally verified in Section VI.

This paper has been organized as follows: Section II provides a brief overview of the neural network model


and its dynamic behavior. Section III provides a formal framework for the memory repair problem using the concepts of neural net computation. Two algorithms are developed by programming the synaptic weight matrix of the neural net. The first algorithm is greedy, and starting from any random configuration (i.e., arbitrary neuron states), it can solve the problem by monotonically reducing the energy function of the neural net. The second algorithm uses the HC technique to yield a near-optimum solution. Section IV gives the simulation results of the neural net algorithms. From the simulation experiments, it is seen that the neural net algorithms are superior to the RM algorithm. An electronic implementation of neural networks is demonstrated in Section V. The intrinsic fault-tolerant ability of a neural net is studied in Section VI. The yield improvement and hardware overhead caused by the neural-net self-repair circuitry are examined in Section VII.

II. NEURAL NET COMPUTATION MODEL

The earliest mathematical formalization of the human

nervous system can be found in the work by McCulloch and Pitts [19], where it was modeled as a set of neurons (computation elements) interconnected by synapses (weighted links). Each neuron in the model is capable of receiving impulses as input from, and firing impulses as output to, potentially all neurons in the network. The output function of a neuron depends on whether the total amount of input excitation received exceeds its predetermined threshold value θ_i or not. The state of neuron i is represented by an all-or-none binary value s_i, with s_i = 0 being a nonfiring condition and s_i = 1 denoting an impulse-firing condition. Note that one may represent a nonzero threshold by applying an external impulse bias b_i to neuron i with b_i = −θ_i.

Interactions between two different neurons occur through a synapse serving as a duplex communication channel. The signals passing through the synapse in both directions are considered independent. Moreover, the signal traveling from neuron i to neuron j is amplified by a weight factor w_ji, i.e., if the impulses fired by a neuron correspond to a unit influence, then a firing neuron i produces w_ji amount of influence to neuron j.

Now, let s_i' denote neuron i's next state value. From the above description one can represent the neural net state transition function as follows:

$$s_i' = \begin{cases} 0, & \text{if } \sum_j w_{ij} s_j < \theta_i \\ 1, & \text{if } \sum_j w_{ij} s_j > \theta_i \\ s_i, & \text{otherwise.} \end{cases}$$

Actually, in the human nervous system or any other biological implementation, neural networks make smooth continuous transitions when the neurons change their states. Therefore, we assume that neurons which receive a larger influence in magnitude will have a faster state


transition. Neuron processors are expected to operate in an asynchronous and random fashion.
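As a concrete illustration, the threshold rule above can be simulated in a few lines of Python. This is a standard rendering of the McCulloch-Pitts update, not the authors' implementation; the weight matrix W and threshold vector theta are assumed given.

```python
import numpy as np

def async_update(W, theta, s, rng):
    """One asynchronous update: a randomly chosen neuron fires iff
    its total weighted input exceeds its threshold; on a tie the
    state is left unchanged."""
    i = rng.integers(len(s))
    h = W[i] @ s                 # total input excitation to neuron i
    if h > theta[i]:
        s[i] = 1
    elif h < theta[i]:
        s[i] = 0
    return s
```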

Following the description of the behavior of an individual neuron, let us look at the overall network state transition behavior. First, a network state is denoted by the vector of neuron state variables. Given a network state vector X = (x_1, x_2, ..., x_N), X is called a fixed point if and only if x_i' = x_i for all i's. That is, of all the possible 2^N states of a neural net, only the fixed points are considered stable. If the current network state is not one of the fixed points, the neural net cannot maintain the present state. One immediate question about the network behavior could concern the state convergence: starting with an arbitrary state, will the network eventually reach a fixed point?

To answer the question, an energy function has been formulated. First, let us consider the case of an excitatory or positive input to a neuron. According to the neuron's functional definition, this input will encourage the neuron to fire impulses. If the neuron is already in the firing state, this input can be looked upon as negative energy, since by convention, a system is more stable when the energy level is lower. Likewise, an inhibitory or negative input to a neuron in the nonfiring state should also be considered as negative energy. On the other hand, positive energy is created if a firing neuron receives an inhibitory input or a nonfiring neuron receives an excitatory input, since the network is potentially destabilized by the input.

If w_ij = w_ji for all i's and j's, the total energy E_NN, commonly known as the Lyapunov function [11], is given by

$$E_{NN} = -\frac{1}{2} \sum_i \sum_j w_{ij} s_i s_j - \sum_i s_i b_i.$$

Note that a change of state by neuron i will result in an absolute decrease of I(s_i)(Σ_j w_ij s_j + b_i) in energy, where I(s_i) = 1 if s_i = 0, and −1 otherwise. Together with the fact that the total energy is bounded, the network is guaranteed to reach a local minimal energy fixed point [12].
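A short simulation makes the descent property concrete. The sketch below, with illustrative names of our own, assumes a symmetric weight matrix with a zero diagonal and thresholds folded into the bias vector b, and asserts that the Lyapunov energy never increases under random asynchronous updates.

```python
import numpy as np

def energy(W, b, s):
    """Lyapunov energy E = -1/2 * s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

def settle(W, b, s, rng, sweeps=100):
    """Random asynchronous updates; for symmetric W with a zero
    diagonal the energy is nonincreasing, so the state settles
    into a local-minimum fixed point."""
    for _ in range(sweeps * len(s)):
        e_before = energy(W, b, s)
        i = rng.integers(len(s))
        s[i] = 1 if (W[i] @ s + b[i]) > 0 else 0
        assert energy(W, b, s) <= e_before + 1e-12  # monotone descent
    return s
```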

III. NEURAL NETWORK SOLUTIONS

The idea of using a neural network to tackle combinatorial optimization problems was first proposed by Hopfield [12], who designed and fabricated a neural network for solving the classic Traveling Salesman Problem [5]. The objective (cost) function of a combinatorial optimization problem can be represented by Lyapunov's energy function of the neural network, and the network's property of convergence from a random initial state to a local minimal energy state can be utilized to reduce the cost function of the combinatorial problem. However, the neural computing approach given in [12] leads only to the design of a greedy gradient descent (GD) algorithm which stops searching at the first local minimal solution encountered; consequently, the neural network solution is generally of low quality.

In this section we study the convergence behavior of neural networks. We demonstrate that more powerful searching strategies can be developed by modifying the simple GD algorithm. By programming the neural network in an appropriate way, we find that the neural nets are capable of searching the solution by HC or tunneling. Thus, the probability of obtaining a globally optimal solution by using neural networks is very high.

A. Array Repair Problem Representation

Assume a memory array of size N × N with exactly p spare rows and q spare columns that can be utilized to replace the defective rows and columns. Assume an arbitrary fault pattern in which faulty cells randomly occur on m (≪ N) distinct rows and n (≪ N) distinct columns of the memory array, such that the compacted subarray of size m × n contains all relevant rows and columns which have at least one faulty cell. Let the matrix D = {d_ij} of size m × n characterize the status of the memory cells, such that d_ij, which corresponds to the cell at row i and column j, is 1 if the cell is faulty, and d_ij = 0 otherwise. Normally, very few cells in the array are faulty, and the characteristic matrix D is highly sparse. A row repair scheme can be represented as a vector U of m bits, such that u_i = 0 if row i is to be replaced, otherwise u_i = 1, 0 ≤ i ≤ m − 1. The column repair scheme is defined in the same fashion, and is denoted by a vector V of n bits. The numbers of 0's in U and V must be less than or equal to p and q, respectively. The memory is said to be repairable if the above constraints are satisfied, and essentially the repair problem is how to determine a pair of U and V such that U^T D V = 0.
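As a quick illustration, the repairability condition is a one-line check; a minimal sketch assuming NumPy 0/1 vectors for U and V as defined above (the function name is ours):

```python
import numpy as np

def is_valid_repair(D, U, V, p, q):
    """U (m bits) and V (n bits) mark kept rows/columns with 1 and
    replaced ones with 0. A scheme is valid iff it uses at most p
    spare rows and q spare columns and U^T D V = 0, i.e., every
    faulty cell lies on a replaced row or column."""
    spares_ok = (U == 0).sum() <= p and (V == 0).sum() <= q
    covered = (U @ D @ V) == 0
    return spares_ok and covered
```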

In addition to the characteristic matrix D, let the overall status of row i and column j of the array be characterized by r_i and c_j, respectively:

$$r_i = \begin{cases} 1, & \text{if row } i \text{ contains defective elements} \\ 0, & \text{otherwise} \end{cases} \qquad c_j = \begin{cases} 1, & \text{if column } j \text{ contains defective elements} \\ 0, & \text{otherwise.} \end{cases}$$

To design a neural net for the memory repair problem, a neural net of size m + n is used, where the states of the first m neurons are denoted by s_{1i}'s, 1 ≤ i ≤ m, and the states of the remaining n neurons by s_{2j}'s, 1 ≤ j ≤ n. For ease of interpretation, the first m neurons are addressed as row neurons, and the remaining n neurons as column neurons. The physical meaning of these neuron states is defined as follows:

$$s_{1i} = \begin{cases} 1, & \text{if row } i \text{ is suggested for replacement} \\ 0, & \text{otherwise} \end{cases} \qquad s_{2j} = \begin{cases} 1, & \text{if column } j \text{ is suggested for replacement} \\ 0, & \text{otherwise.} \end{cases}$$

The cost function C_MR of the memory repair problem should penalize both nonfeasible repair schemes and incomplete coverage schemes with higher costs. These two


aspects can be modeled by the following two expressions:

$$C_1 = \frac{A}{2}\left(\sum_{i=1}^{m} s_{1i} - p\right)^2 + \frac{A}{2}\left(\sum_{j=1}^{n} s_{2j} - q\right)^2$$

$$C_2 = \frac{B}{2}\sum_{i=1}^{m}\sum_{j=1}^{n} d_{ij}(1 - s_{1i})(1 - s_{2j}) + \frac{B}{2}\sum_{j=1}^{n}\sum_{i=1}^{m} d_{ij}(1 - s_{2j})(1 - s_{1i})$$

so that the overall cost function C_MR is represented as an algebraic sum of C_1 and C_2. In the above expressions, the values of A and B are empirically decided and, as shown in Section IV, their relative values are critical for obtaining the appropriate percentage of successful repairs.

The expression for C_1 encourages exactly p spare rows and q spare columns to be used in the repair scheme. For those nonfeasible schemes that require more than the available spare rows and/or columns, the cost will increase quadratically. If necessary, the optimum usage of spares can be experimentally attempted by repeating the search operation with successively reduced amounts of available spares (i.e., progressively decrementing the value of p and/or q in the expression for C_1), until no successful repair scheme can be found.

The expression for C_2 monotonically decreases as more and more defective cells are covered by the spare rows and columns. If all defective elements are covered by at least one spare row or column, the above expression is minimized to zero. Unsuccessful repair schemes which fail to cover all defective elements will yield positive costs. The two double summation terms in C_2 are equivalent. As will be seen later in the expression for the synaptic weight matrix, this duplication facilitates the later transformation to neural net energy functions.
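For illustration, the combined cost C_1 + C_2 can be evaluated directly from a candidate repair scheme. The sketch below assumes the expressions reconstructed above, with s1 and s2 holding the row- and column-neuron states; the helper name is ours.

```python
import numpy as np

def repair_cost(D, s1, s2, p, q, A, B):
    """C1 penalizes using other than p spare rows / q spare columns;
    C2 penalizes every faulty cell left uncovered by a replaced line
    (the two equivalent double sums combine into a single factor B)."""
    C1 = A / 2 * ((s1.sum() - p) ** 2 + (s2.sum() - q) ** 2)
    C2 = B * np.outer(1 - s1, 1 - s2)[D.astype(bool)].sum()
    return C1 + C2
```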

Another cost function can be defined by modifying the C_1 expression, i.e.,

$$C_1' = \frac{A}{2}\left(\sum_{i=1}^{m} s_{1i}\right)^2 + \frac{A}{2}\left(\sum_{j=1}^{n} s_{2j}\right)^2.$$

Note that C_1' is actually a simplified C_1 with p and q set to zero. Without considering whether all defects will be covered or not, the expression for C_1' prefers repair schemes with fewer spares used. Intuitively, this way more efficient repair schemes may be encouraged. But the new C_1' + C_2 combination has a potential danger of having a large number of unsuccessful repair schemes with a small number of uncovered defects mapped to local minima just because they request fewer spares. The differences between the neural networks defined by these two cost functions will be further illustrated later in the simulation studies.

B. The Gradient Descent (GD) Approach

It may be recalled that the neural network energy function is E_NN = −1/2 Σ_i Σ_j w_ij s_i s_j − Σ_i s_i b_i. By rewriting C_MR in the form of E_NN, the weights of the interconnecting synapses and the intensities of the external biases to neurons can be determined through simple algebraic manipulations.

But in order to keep the main diagonal of the neural net's synaptic weight matrix zero, all A/2 · Σ s_i² terms in the array repair cost expression are rewritten as A/2 · Σ s_i. Since s_i ∈ {0, 1}, these two terms are mathematically equivalent. However, A/2 · Σ s_i² corresponds to self-feedback through a synapse of strength −A for every neuron, whereas A/2 · Σ s_i means a bias of −A/2 for every neuron. The resulting neural network is as follows:

$$w_{1i,1j} = -A(1 - \delta_{ij}), \qquad w_{2i,2j} = -A(1 - \delta_{ij}), \qquad w_{1i,2j} = w_{2j,1i} = -B\,d_{ij}$$

$$b_{1i} = (p - 1/2)A + B\sum_j d_{ij}, \qquad b_{2j} = (q - 1/2)A + B\sum_i d_{ij}$$

where δ_ij = 0 if i ≠ j, otherwise 1.

In order to illustrate how to construct the synaptic weight matrix from these values, let us consider a pathological defect pattern that consists of 16 faulty cells, as shown in Fig. 1. If there are only four spare rows and four spare columns, a greedy algorithm such as RM cannot produce a successful repair scheme [1]. The only successful allocation of spares for replacement is indicated in the figure by horizontal and vertical stripes. But due to the presence of four defects in both row 1 and column 9, the RM algorithm will be tempted to make the first two repairs on them. The algorithm will thereby fail to repair the memory because, in addition to the available six spare rows and columns, the algorithm will require two more columns or rows to cover all the defects along the diagonal direction. The neural network algorithm described in this section can successfully repair such a defect pattern by finding the unique solution shown in Fig. 1. For such a memory fault pattern, the neural net synaptic weights will be estimated by adjusting A/B = 1 (the value A = 2 has been chosen empirically and is not critical for the performance of the algorithm) in the above expressions. The resulting neural net synaptic weight matrix and the biases are given in Fig. 2.
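The translation from D, p, and q to the weight matrix and biases is mechanical. The following sketch is our own NumPy rendering of the GD expressions above, not the authors' tooling; with A = B = 2 it produces a matrix of the kind shown in Fig. 2.

```python
import numpy as np

def gd_network(D, p, q, A=2.0, B=2.0):
    """Build the GD net: mutual inhibition -A within the row-neuron
    and column-neuron groups (zero diagonal), cross-coupling -B*d_ij
    between row neuron i and column neuron j, and biases
    (p - 1/2)A + B*(row fault count) and (q - 1/2)A + B*(col count)."""
    m, n = D.shape
    W = np.zeros((m + n, m + n))
    W[:m, :m] = -A * (1 - np.eye(m))     # row-row synapses
    W[m:, m:] = -A * (1 - np.eye(n))     # column-column synapses
    W[:m, m:] = -B * D                   # row-column synapses
    W[m:, :m] = -B * D.T                 # symmetric counterpart
    b = np.concatenate([(p - 0.5) * A + B * D.sum(axis=1),
                        (q - 0.5) * A + B * D.sum(axis=0)])
    return W, b
```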

C. The Hill-Climbing (HC) Approach

We propose here a neural net with a simplified HC behavior that provides high-quality solutions similar to the powerful combinatorial technique called simulated annealing [13]. First, let us note that the solutions to the memory array repair problem are classified as success and failure. In fact, it is commonly known that all optimization problems can be transformed to a series of yes/no questions [5]. Due to this specific criterion, we do not consider any random move that may increase the energy of the system until the move is absolutely necessary, i.e., until the search has reached an unsuccessful local energy minimum.


Fig. 1. A repairable faulty array with four spare rows and four spare columns.

Fig. 2. The neural net synaptic weight matrix W and biases for the defect pattern of Fig. 1 (A = B = 2): all synaptic weights are 0 or −2, and the biases are b = (15, 9, 9, 9, 9, 11, 11, 11, 11, 7, 11, 11, 11, 11, 9, 9, 9, 9, 15, 7).

If the current local minimum does not yield a successful repair, moves that increase the system energy are allowed by providing negative synaptic feedback to the neurons themselves. The basic idea is that when the search has reached an unsuccessful local minimum, we can force the neural net to make a move by turning on a neuron, which appears to enter a lower energy state but, in fact, will increase the system energy due to the negative self-feedback. Next, the system is expected to turn off some other neuron to enter a new lower energy state, thus escaping the local minimum energy trap. Notice that with very low probability, it is possible for a neuron to fall into a loop of alternating on and off states, just as it is possible for simulated annealing to cycle through random moves. Consequently, the network behavior is not admissible, in the sense that it is not guaranteed to converge to an optimal solution state. As in simulated annealing, which uses a predefined criterion to break the inner loop of optimization, a suitable timeout mechanism is necessary for HC neural nets to prevent excessively long searches.

We begin the description of the new neural net by giving the synaptic weights and biases as follows:

$$w_{1i,1j} = -A, \qquad w_{2i,2j} = -A, \qquad w_{1i,2j} = w_{2j,1i} = -B\,d_{ij}$$

$$b_{1i} = pA + B\sum_j d_{ij}, \qquad b_{2j} = qA + B\sum_i d_{ij}.$$

To examine the neural net, let us assume that A ≠ 0 and B = 0. Thus, each row neuron will receive a positive p · A amount of bias, and each column neuron will receive a positive q · A amount of bias. Also, due to B = 0, the row neurons are independent of the column neurons, i.e., there are no synaptic connections between the two groups. Suppose there are p' < p row neurons that are in firing states; then each row neuron will receive a positive (p − p') · A amount of influence, and all of them will be encouraged to fire. Similarly, if p' > p, all row neurons will receive a negative (p − p') · A amount of influence, and all will tend to turn off.

Next, let us assume that A = 0 and B ≠ 0. In this case, the amount of influence received by row neuron i is given by

$$B\sum_j d_{ij}(1 - s_{2j})$$

i.e., B times the number of defective elements in row i that would be covered exclusively by a replacement of this row. Those defective elements in row i that are currently covered by some column repairs will not contribute any urgency to force replacement of this row.

Consider the situation where all the spares have been utilized (hypothetically) for repair, but there is still a single defect left uncovered by the current repair scheme. At this point, the number of defects inside a row or column is the only influence on the neurons. For the row and column that contain this yet uncovered defect, a positive total influence of B will be received by the two corresponding neurons. After this defect is covered by turning on, say, the row neuron, which causes the use of one more spare row than allowed, all the row neurons will now receive an additional negative influence of A due to the extra spare suggested. If we choose to have B > A, then the neural


net will be stuck at the present state, making the repair scheme unsuccessful. On the other hand, if we have A > B, all the current row spares which cover only one faulty element exclusively will now receive a net negative influence equal to |B − A|, thus causing the network to switch to a new state and give up on the current scheme.
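Putting the pieces together, the HC network can be simulated as below. This is an illustrative sketch, not the hardware: the weights and biases follow the HC expressions above, the −A self-feedback supplies the escape mechanism, and a fixed step budget plays the role of the timeout mechanism mentioned earlier.

```python
import numpy as np

def hc_repair(D, p, q, A=1.0, B=2.0, timeout=5000, seed=0):
    """HC net: like GD but with -A self-feedback and biases p*A / q*A,
    so an unsuccessful local minimum can be escaped by turning on a
    neuron at an apparent (but false) energy gain."""
    m, n = D.shape
    rng = np.random.default_rng(seed)
    W = np.zeros((m + n, m + n))
    W[:m, :m] = -A                       # includes self-feedback
    W[m:, m:] = -A
    W[:m, m:] = -B * D
    W[m:, :m] = -B * D.T
    b = np.concatenate([p * A + B * D.sum(axis=1),
                        q * A + B * D.sum(axis=0)])
    s = np.zeros(m + n, dtype=int)       # start with all neurons off
    for _ in range(timeout):
        i = rng.integers(m + n)
        s[i] = 1 if (W[i] @ s + b[i]) > 0 else 0
        rows, cols = s[:m].astype(bool), s[m:].astype(bool)
        if (rows.sum() <= p and cols.sum() <= q
                and not D[~rows][:, ~cols].any()):
            return rows, cols            # successful repair scheme
    return None                          # timed out
```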

IV. SIMULATION STUDIES

In this section, simulation results are provided to demonstrate the superiority of the proposed neural computing approach for the problem of automatic array repair. Six hundred reduced array fault patterns of size 10 × 10 (Case 1) and 20 × 20 (Case 2), randomly generated with about 10, 15, and 20% faulty elements, were used in the experiments. As was explained in Section III, these small-size arrays represent the actual defective rows and columns in large-size memory arrays, some as large as a few million bits.

The performances of two GD neural nets, GD and GD', defined by the cost functions C_1 + C_2 and C_1' + C_2, respectively, are compared. For both 10 × 10 and 20 × 20 arrays, the probabilities of finding a successful repair scheme versus the ratio B/A are depicted in Fig. 3. By controlling the value of B/A, the importance of the fault coverage relative to the spare usage can be manipulated.

According to the figure, the effectiveness of GD' is largely affected by how A and B are selected. When A is set equal to B, no successful repair scheme can be found by GD' for any of the repairable fault patterns, just as was expected in Section III. For 10 × 10 arrays, the percentage of successful repairs improves to about 50% as B/A increases to 5, and thereafter converges to about 42%. GD' behaves similarly for 20 × 20 arrays, except that the peak performance happens at about B/A = 12. Now, let k be the maximum of p and q, where p and q are the maximum numbers of spare rows and columns allowed to form a successful repair scheme, respectively. The B/A ratio for a peak value to occur is found to be equal to k. Note that each firing row (column) neuron will send a −A amount of influence to disable all other row (column) neurons, and a row or column neuron is encouraged to fire by as little as a B amount of influence to cover the faults in a row or column. In order to allow the neural net to explore solutions using up to k spare rows and k spare columns, we must have B ≤ kA to keep up to k row or column neurons firing at the same time.

On the other hand, GD shows the advantage of its low sensitivity to how A and B are selected over the range of B/A > 1. In a hardware implementation of a neural network chip, the ratio B/A is likely to vary over a wide range because of processing parameter variations and the wide tolerance of resistive elements. Thus the performance of the neural network will remain uniform from chip to chip if GD is implemented as opposed to GD', whose behavior changes dramatically with different values of B/A.

Fig. 3. Performance comparisons between GD and GD'.

The percentage of successful repairs obtained by GD appears to have a peak value of about 50% at A = B, and it declines asymptotically to about 42% as B/A increases. It is not surprising that the percentages of successful repairs obtained by GD' and GD converge to the same value, since when B is much larger than A, the consideration for the spare usage is of little or no effect compared with the consideration for fault coverage.

For the second experiment, the effectiveness of the RM algorithm is compared with GD. In order to provide an equal starting situation, the programmed neural net is started with all neuron processors in nonfiring states instead of other random combinations. We will use GD-zero to denote this special case of the GD convergence. The performances of these two algorithms are compared in Table I. Random defect patterns are generated for both cases, and the algorithms are applied to repair the faulty arrays with three different sets of available spares. From the results it was seen that on average, GD-zero is twice as successful as RM when the defect pattern is not very large (represented by Case 1) and few spare rows and columns are used. But GD-zero is about three to five times more successful when the defect pattern is large (represented by Case 2) and a relatively large number of spare rows and columns are used. As for the number of steps taken to find a repair scheme, it is about the same for both algorithms.

Second, tradeoffs between the two neural computing methods are examined, and the results are shown in Table II. For each defect pattern, a number of random trials are performed by each method. Average performance reaches consistency within one hundred random trials. As is expected, the simulation indicates that the HC approach is almost perfect in locating a successful repair scheme for repairable arrays due to its ability to perform global searches. The probability for the GD to reach a successful repair in a random trial is about 0.5. The average number of steps needed by HC is about two to three times that needed by GD. The runtime for the GD algorithm varies very little over a large number of experiments, but the HC algorithm is found to have a wide variance in the number of steps to execute the search successfully.

The chances of getting a successful scheme achieved by the four methods are shown in Fig. 4 as percentages.

Fig. 4. Percentages of successful repair schemes achieved by HC, GD-zero, GD, and RM for 10 × 10 and 20 × 20 arrays with 10, 15, and 20% defects.

Table I. Average percentage of successful repairs done using RM and GD-zero.

Table II. Performance comparisons between the GD and HC algorithms.

It can be seen that for 10 × 10 arrays, HC achieves almost 100% success in all cases. GD-zero performs better than GD, where the neurons are randomly turned on at the initiation of the algorithm. Greedy techniques such as RM fail to cover more than 67% of the cases on the average. For 20 × 20 arrays, RM's performance deteriorates, and on average in more than 80% of the cases it fails to repair the memory.

To summarize the simulation results, it may be pointed out that GD is fast in reaching a spare allocation, and has small variance in the number of search steps. On the other hand, the number of search steps used by HC can vary widely, depending on the initial random starting states and how difficult the problem instances are. But on average, the number of search steps needed by HC does not escalate exponentially. For four out of six types of settings, the average number of search steps for HC is limited to about twice the number for GD. For the other two types of settings, the ratios of the average number of search steps are no worse than four. In fact, if we compare the equivalent average number of search steps expected for

the GD approach to attain the same level of success as the HC approach does within 1%, the HC approach is found to be more efficient. The actual numbers can be seen in Table III.

Table III. GD's and HC's average number of search steps.

V. ELECTRONIC IMPLEMENTATION

In this section, we demonstrate how to implement an electronic neural network that can repair a faulty memory by using the HC searching strategy. Electronic neural nets can be built by both analog and digital techniques. Murray and Smith have discussed their tradeoffs and


developed a digital neural net representing the interactions by pulse streams of different rates [22]. Even though the digital network has better noise immunity, it requires large silicon area for its pulse generators and digital multipliers. Analog neural nets using current-summing principles were developed by Graf et al. [7] and Sivilotti et al. [24]. Carver Mead's book on analog VLSI presents the design strategies for several types of analog neural systems [20]. These analog neural nets require small area, and their intrinsic robustness and ability to compute correctly even in the presence of component failures are particularly useful features for large-scale VLSI implementation.

In this design, a neuron is realized as a difference amplifier which produces binary output voltages. The firing neuron state is represented by the high voltage output, and the nonfiring state by the low voltage output. The synaptic influences between neurons, as well as the biases to neurons, are represented by electrical currents. Thus, an excitatory influence propagated to a neuron can be realized by injecting current into the corresponding amplifier input. Similarly, an inhibitory influence propagated to a neuron can be realized by sinking current from the corresponding amplifier input. As for the synaptic amplification, it is simulated by the branch resistance regulating the amount of current that passes through.

Note that for the array repair spare allocation problem discussed here, all the synaptic weights of the interconnection matrix are negative. Moreover, between two connected neurons, the synaptic weight of the connecting link can only be one of two different values, −A or −B. If we assume that there are equal numbers of row and column neurons, an example electronic neural net with eight neurons can be implemented, as shown in Fig. 5.

According to Fig. 5, a neuron fires a negative influence to another neuron by turning on a transistor which sinks current from the target neuron's amplifier input. The amount of current involved is controlled by the size of the transistor. Note that owing to the assumption that equal numbers of neurons are reserved for row and column repairs, we are able to divide the interconnection matrix into four equal parts. It is not hard to see that only the first and third quadrants, as indicated in Fig. 5, need to be programmed based on the variable memory array defect pattern. For each programmable link, an additional transistor controlled by a memory cell is added to the pull-down transistor. A programmable link can be disconnected (connected) by storing a 0 (1) in the memory cell.

Fig. 6 shows the essential steps in programming the neural net's interconnection network. Let the memory array defect pattern be given in Fig. 6(a), with faulty cells represented by black dots. Next, by deleting fault-free rows and columns, a compressed defect pattern is obtained in Fig. 6(b), with a faulty cell denoted by a 1. Then the compressed defect pattern is used to program the first quadrant of the neural net interconnection matrix, and the third quadrant is programmed by the transposed compressed defect pattern, as is shown in Fig. 6(c).
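The compression step of Fig. 6(a)-(b) amounts to deleting all-zero rows and columns from the full defect bitmap; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def compress_defect_map(bitmap):
    """Delete fault-free rows and columns from the full defect bitmap,
    keeping the surviving indices so a repair scheme can later be
    expanded back to full-array coordinates. The compressed pattern
    programs quadrant 1 of the synapse matrix, and its transpose
    programs quadrant 3."""
    faulty_rows = np.flatnonzero(bitmap.any(axis=1))
    faulty_cols = np.flatnonzero(bitmap.any(axis=0))
    compressed = bitmap[np.ix_(faulty_rows, faulty_cols)]
    return compressed, faulty_rows, faulty_cols
```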

Finally, the possibility of building the neural network

Fig. 5. An example electronic neural net for memory repair.

alongside an embedded memory to achieve this self-repair purpose is demonstrated by the schematic diagram shown in Fig. 7.

The design to be discussed assumes that a built-in tester is available to provide fault diagnosis information. First of all, the status of each memory row and column is determined after the testing is done, and this information is stored in the faulty-row-indicator shift-register (FRISR) and the faulty-column-indicator shift-register (FCISR), with a faulty row or column indicated by a 1. Then, to compress the memory array defect pattern, the detailed defect pattern of each faulty row is provided to the row-defect-pattern shift-register (RDPSR) one row at a time. As mentioned in Section III, the characteristic matrix D is highly sparse, and the fault pattern can be stored in the form of a compressed matrix, each row and column of which will contain at least one faulty element. This is obtained by shifting those bits of the RDPSR that correspond to the nonzero bits of the FCISR into the compressed-row-defect-pattern shift-register (CRDPSR). The content of the CRDPSR is then used to program a row (column) in the first (third) quadrant of the neural net's interconnection network. The row (column) address of the quadrant is obtained by counting the nonzero bits in the FRISR. The stimulating biases to the row and column neuron inputs are generated by counting the total number of faults in each row (column) in the row (column) fault-count shift-registers. After a repair scheme is obtained by reading the outputs of the neuron amplifiers, the information is expanded in reverse order to the compression of the defect patterns, and passed on to the control logic of the actual reconfiguration circuits.


Fig. 6. Example programming of the neural network for memory repair. (a) Memory array defect pattern. (b) Compressed defect pattern. (c) Neural network RAM cell bit map.

Fig. 7. Schematic diagram of the BISR circuitry alongside an embedded memory array: the address decoder, data buffers, the faulty-row and faulty-column indicator shift registers, the row-defect-pattern and compressed-row-defect-pattern shift registers, the row and column fault-count shift registers, and the programmable portion of the neural net's interconnection network.

Normally, laser zapping [23], [3], focused ion-beam [21], or electron beam [6] techniques are used to restructure a faulty memory array, where the laser or charged beams are used for blowing out the programmable fuses to disconnect the faulty rows and columns. These schemes cannot be employed in automatic restructuring. Two viable techniques for self-restructuring are i) electronically programmable amorphous silicon fuses, which can be programmed by applying a 20-V pulse [27]; and ii) programmable electronic reconfiguration switches, which usually impose a small amount of penalty on circuit speed and area.

The circuit behavior was verified through SPICE simulations. Simulation output for a compressed 4 × 4 defect pattern with eight defective memory cells is shown in Fig. 8 as an example. The defects are represented by shaded squares. The initial state of the neural net was zero, i.e., there was no allocation of spares for any faulty rows or columns. Since every neuron represents either a faulty row or a faulty column in the compressed defect pattern, to cover the defects all neurons initially began transitioning toward the firing state. After 3 ns, the neurons representing row 2 and column 4 were successful in the competition and remained in the firing state. This can be explained by the fact that row 2 and column 4 each have three defects, giving the corresponding neurons the largest positive bias toward the firing state. All other neurons started to turn off, due to the mutual discouragement propagated through the negative synapses. When the remaining neurons were becoming low in activity, the mutual discouragement factor was also weakening, and the neurons started to move toward the firing state again.

Fig. 8. SPICE simulation of the neural net repairing a compressed 4 × 4 defect pattern with eight defective cells (neuron output voltages versus time in nanoseconds).

This time the neurons representing row 4 and column 1 were successful. The other four neurons then continued to the off state, since there was no remaining defect to cover and the spares were all used up.

VI. NEURAL NET BEHAVIOR IN FAULT CONDITIONS

One unavoidable problem in adding extra hardware for BISR is that the extra hardware itself may be subject to component failures. In this section, we demonstrate the intrinsic fault-tolerant capability of neural networks as a reconfiguration control unit in the memory repair circuit.

Three types of component failures have been identified in neural networks to serve as the fault model, namely synapse-stuck faults, bias fluctuations, and neuron-stuck faults. For each faulty synapse, either of the synaptic


Fig. 9. Neural net performance in fault conditions for (a) the 20-neuron network and (b) the 40-neuron network: percentage of successful repairs versus the number of injected synapse-stuck, bias-fluctuation, and neuron-stuck faults.

weights w_ij or w_ji can be assumed to be stuck-at-x, where x is a positive number in the range of synaptic strength values, due to transistor-stuck faults or defective memory cells that control the programmable synapses. Faulty bias generators are modeled to fluctuate within one unit of the predetermined biases, and faulty neurons will have stuck-at-firing or stuck-at-nonfiring states.

Rather than rendering the entire neural network useless, these faults will only reshape the energy (cost) distribution over the set of spare allocation schemes. Allocation schemes which correspond to complete covers of all defective memory cells may no longer be mapped to acceptable local energy minima, thus prolonging the search for an acceptable solution. On the other hand, incomplete allocation schemes may be mapped to false acceptable local energy minima, causing the neural net to stop searching without success.

Identical memory defect patterns prepared in Section IV are used here. Random faults are incrementally injected into an HC type of neural net to examine the achievable percentages of repairs for repairable defect patterns. To distinguish the degrees of seriousness among the different types of faults, we inject only faults of the same type. The simulation results are shown in Fig. 9.

The results indicate that a small number of synapse-stuck faults, up to five in the 20-neuron network and ten in the 40-neuron network, have almost no effect on the average performance. But the bias fluctuation faults and the neuron-stuck faults cause the average percentage of repairs out of repairable defect patterns to decrease steadily to zero when over one-fourth of the total neurons are affected. The difference between a bias fluctuation fault and a neuron-stuck fault is in the amount of influence given to a particular faulty row or column for repair. While a bias fluctuation fault will encourage or discourage the corresponding row or column substitution slightly by one unit of influence, a neuron-stuck fault will actually insist that the substitution be made.
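For reference, the fault-injection procedure used in such an experiment can be sketched as follows; the function and parameter names are illustrative, not the authors' simulator.

```python
import numpy as np

def inject_faults(W, b, n_faults, kind, rng, x=1.0):
    """Inject one fault type into a programmed net (cf. Fig. 9):
    'synapse' sticks a random weight at a positive value x,
    'bias' perturbs a random bias by one unit, and 'neuron'
    marks a neuron whose state must be frozen at update time."""
    W, b = W.copy(), b.copy()
    stuck_neurons = {}
    N = len(b)
    for _ in range(n_faults):
        if kind == 'synapse':
            i, j = rng.integers(N, size=2)
            W[i, j] = x                          # stuck-at-x weight
        elif kind == 'bias':
            b[rng.integers(N)] += rng.choice([-1.0, 1.0])
        elif kind == 'neuron':
            stuck_neurons[int(rng.integers(N))] = int(rng.integers(2))
    return W, b, stuck_neurons
```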

VII. YIELD ANALYSIS

In this section, a quantitative analysis of the yield enhancement due to the neural network's self-repair capability is done. Faults are injected into memory arrays, spares, and neural networks to compute the resulting yield. The overhead of the self-repair logic is also estimated.

A well-known yield formula due to Stapper [25], [26] is used here to calculate the original yield, Y_O, without the neural-net-controlled self-repair:

$$Y_O = (1 + A\delta/\alpha)^{-\alpha}$$

where δ is the defect density, A is the memory array area, and α is a clustering factor of the defects. Let P_r be the probability function for a defect pattern to be repairable with respect to the remaining fault-free spares and the fault condition of the self-repair circuitry, and let B be the area of overhead. Then, the yield, Y_N, with neural-net-controlled self-repair can be calculated as follows:

$$Y_N = Y_O' + (1 - Y_O')P_r$$

where

$$Y_O' = [1 + (A + B)\delta/\alpha]^{-\alpha}.$$

By minor algebraic manipulation, it can be seen that

$$Y_O' = \frac{Y_O}{(1 + B\delta\,Y_O^{1/\alpha}/\alpha)^{\alpha}}.$$

If α = 1, then the new yield formula can be simplified to

$$Y_N = P_r + \frac{(1 - P_r)Y_O}{1 + B\delta Y_O}.$$

We used 256 × 256 (64-Kbit) memory arrays for simulation with α = 1 and Aδ = 0.25, 0.5, 1, 2, 3, 4, so that Y_O = 80%, 66.7%, 50%, 33.3%, 25%, and 20%, respectively. Six hundred memory arrays, each with at least one defect, are generated accordingly. For each set of memory arrays, up to four spare rows and four spare columns were provided to examine the neural net repairability with respect to the number of available spares. The spares and the neural net components are all subject to the same degree of defect density as the memory array. The resulting function values of P_r are shown in Fig. 10. Finally, the corresponding calculated yields are given in Fig. 11.


Fig. 10. Repairability P_r of a defective 256 × 256 (64-Kbit) memory array versus the number of spares, for Aδ = 0.25, 0.5, 1, 2, 3, 4 (Aδ is the expected number of defects).

Fig. 11. Calculated yield with self-repair versus the number of spares (Aδ is the expected number of defects).

According to the schematic diagram shown in Fig. 7, except for the FC/RISR and the ROPSR, which are proportional to the dimension of the memory array, the complexity of the remaining self-repair logic basically depends on the dimensions of the compressed defect patterns. For instance, an m x n compressed defect pattern will require a neural net of m + n neurons. Since the mean and variance of the number of faulty columns (rows) can be calculated according to the fault distribution function, an estimate of the required neural net size can easily be made to accommodate nearly all possible compressed defect patterns (see the sketch after this paragraph). For the case of 256 x 256 (64-K) memory arrays with Aδ as high as 4, it is a near-certainty that the compressed array will not be larger than 10 x 10. Given an N x N DRAM, let the size of the maximum compressed memory defect pattern be n x n. An itemized account of the dynamic memory array and self-repair hardware, based on transistor count, is given in Table IV, and the percentages of overhead are listed in Table V for various memory and neural net sizes. As indicated by the results, the overhead is insignificant compared with the yield improvement, and the overhead can be even smaller if static RAM's, with 6N² transistors in the memory array, are considered. Here the overhead of the BIST circuit is not included. Typically this overhead is very low (about 28 flip-flops and ten gates for a 256-K RAM), as shown in Mazumder's earlier papers [16], [17].
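The sizing argument can be made concrete with a small sketch. The Poisson model below is an assumption introduced purely for illustration; the paper only requires that the mean and variance of the faulty-line count be known.

from math import exp, factorial

def poisson_cdf(k, lam):
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

def required_dimension(mean_faulty_lines, coverage=0.995):
    """Smallest n such that P(# faulty rows or columns <= n) >= coverage."""
    n = 0
    while poisson_cdf(n, mean_faulty_lines) < coverage:
        n += 1
    return n

# With A*delta = 4, at most 4 faulty rows/columns are expected on average:
print(required_dimension(4.0))   # -> 10, consistent with the 10 x 10 bound above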

[Table IV. Itemized account of an N x N DRAM with BISR circuitry, by transistor count: memory cells, row decoders, column decoders, sense amplifiers, row/column defect indicator SR's, row defect pattern SR, row fault count SR, column fault count SR, RAM word line counter, compressed row defect pattern SR, programmable and nonprogrammable synapses, and neurons; the counts are expressed in terms of the array size N and the compressed pattern size n.]

TABLE V
SELF-REPAIR HARDWARE OVERHEAD (%) FOR VARIOUS RAM SIZES

                       n = 8    n = 12   n = 16   n = 20
N = 256  (64 Kbit)      5.11     5.66     6.35     7.20
N = 512  (256 Kbit)     2.46     2.59     2.76     2.97
N = 1024 (1 Mbit)       1.21     1.23     1.28     1.33

VIII. FINAL REMARKS

The biological nervous system's ability to solve perceptual optimization problems is imitated here to tackle the VLSI array repair problem. In contrast to the current sequential repair algorithms, which run slowly on conventional digital computers, the neural network's collective computational property provides very fast solutions. Methods to transform the problem instance into the neural network computation model are demonstrated in detail. Of the two types of neural nets studied in this paper, the GD neural network has been found to be two to four times better than the RM algorithm in obtaining successful repair schemes. The GD minimizes the energy function of the network only in the locality of the starting energy value. The performance of the neural network is further improved by introducing an HC technique that allows the search to escape the traps of local minima. By generating random defect patterns and experimenting with a large number of arrays, it is seen that the HC algorithm finds a solution in a repairable memory array with near certainty (with a probability of 0.98 or more). For the same fault patterns, simple commercial algorithms, like RM, can yield feasible solutions for only 20% of the patterns. On the average, about twice as many search steps are used by the HC as opposed to the GD.
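The GD-versus-HC distinction can be summarized in a few lines of Python. This sketch uses a generic probabilistic acceptance rule for the uphill moves; the paper's actual hardware HC mechanism is described in the earlier sections and need not take this exact form.

import numpy as np

rng = np.random.default_rng(1)

def energy(s, W, b):
    return -0.5 * s @ W @ s - b @ s           # Hopfield energy function

def search(W, b, hill_climb=False, steps=2000, temp=1.0):
    s = rng.integers(0, 2, size=len(b))
    for _ in range(steps):
        i = rng.integers(len(b))
        t = s.copy(); t[i] ^= 1               # flip one neuron
        dE = energy(t, W, b) - energy(s, W, b)
        if dE < 0:
            s = t                             # GD: accept only downhill moves
        elif hill_climb and rng.random() < np.exp(-dE / temp):
            s = t                             # HC: occasionally accept uphill moves
    return s

# Example: compare on a random symmetric weight matrix
n = 20
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(size=n)
s_gd = search(W, b)                   # may stop in a shallow local minimum
s_hc = search(W, b, hill_climb=True)  # can escape shallow local minima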

Both the HC and GD neural networks can be implemented in hardware using very small overhead, typically less than 3% if the memory size is 256 Kbit or more. The payoff of this BISR approach is very high: the VLSI chip yield can increase from 10% without the BISR circuit to about 100% by using the proposed neural nets. The paper also proves that the neural networks are much more robust and fault-tolerant than conventional logic circuits, and


thereby are ideally suited for self-repair circuits. The proposed designs of neural networks can operate correctly even in the presence of multiple faulty synaptic circuit elements, and as more neurons become permanently stuck at a firing or a nonfiring state, the networks gracefully degrade in their ability to repair faulty arrays.

The paper shows how to solve a vertex-cover problem of graph theory, generally known to be an intractable problem, by using neural networks. A large number of problems in other domains, which can be modeled in similar graph-theoretic terms, can also be solved by the neural-network designs discussed in this paper. In the memory repair problem, an entire row or column is replaced to eliminate a defective cell. Such an approach is easy to implement and is cost-effective in redundant memory designs because the cells are very small in size. But a large class of array networks in systolic and arithmetic logic circuits employ a different strategy, where a defective cell is exactly replaced by a spare or redundant cell. An appropriate graph model for such an array repair problem is to construct a maximum matching between the vertices of a bipartite graph representing the set of faulty cells and the set of available spare cells. The technique described in this paper can be extended to solve the maximum matching problem by neural networks [18].
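For reference, the following sketch solves the exact-replacement model as a classical maximum bipartite matching via augmenting paths; the neural-network formulation of the same problem is the subject of [18]. All names here are illustrative.

def max_bipartite_matching(adj, n_right):
    """adj[u] = list of spare cells that can replace faulty cell u."""
    match_right = [-1] * n_right          # spare -> faulty cell, or -1 if free

    def try_assign(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # take a free spare, or reroute its current owner elsewhere
                if match_right[v] == -1 or try_assign(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matched = 0
    for u in range(len(adj)):
        if try_assign(u, [False] * n_right):
            matched += 1
    return matched   # repair succeeds iff every faulty cell is matched

# Three faulty cells, three spares, with limited reachability:
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))   # -> 3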

The overall goal of the proposed BISR circuits is to improve the chip yield by reconfiguring the faulty components at the time of chip fabrication, and also to extend the life span of the chip by automatically testing and repairing it whenever a failure is detected during the chip's normal operation. In space, avionics, and oceanic applications, where manual maintenance, namely field testing and replacement, is not feasible, such an auto-repair technique will be very useful in improving the reliability and survivability of computer and communication equipment.

REFERENCES

[1] J. R. Day, "A fault-driven comprehensive redundancy algorithm for repair of dynamic RAMs," IEEE Design and Test, vol. 2, pp. 35-44, June 1985.
[2] R. C. Evans, "Testing repairable RAMs and mostly good memories," in Proc. Int. Test Conf., Oct. 1981, pp. 49-55.
[3] B. F. Fitzgerald and E. P. Thoma, "Circuit implementation of fusible redundant addresses of RAMs for productivity enhancement," IBM J. Res. Develop., vol. 24, pp. 291-298, 1980.
[4] W. K. Fuchs and M.-F. Chang, "Diagnosis and repair of large memories: a critical review and recent results," in Defect and Fault Tolerance in VLSI Systems, I. Koren, Ed. New York: Plenum, 1989, pp. 213-225.
[5] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: Freeman, 1979.
[6] P. Girard, F. M. Roche, and B. Pistoulet, "Electron beam effects on VLSI MOS: conditions for testing and reconfiguration," in Wafer-Scale Integration, G. Saucier and J. Trilhe, Eds. New York: Elsevier, 1986, pp. 301-310.
[7] H. P. Graf and P. deVegvar, "A CMOS implementation of a neural network model," in Advanced Research in VLSI, pp. 351-367, 1987.
[8] H. P. Graf et al., "VLSI implementation of a neural network memory with several hundreds of neurons," in Proc. Conf. Neural Networks for Computing, Amer. Inst. of Phys., 1986, pp. 182-187.
[9] R. W. Haddad and A. T. Dahbura, "Increased throughput for the testing and repair of RAMs with redundancy," in Proc. Int. Conf. Computer-Aided Design, 1987, pp. 230-233.
[10] M. Hasan and C.-L. Liu, "Minimum fault coverage in reconfigurable arrays," in Proc. Int. Symp. Fault-Tolerant Comput., June 1988, pp. 348-353.
[11] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," in Proc. Nat. Acad. Sci. USA, vol. 79, Apr. 1982, pp. 2554-2558.
[12] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[13] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.
[14] S. Y. Kuo and W. K. Fuchs, "Efficient spare allocation in reconfigurable arrays," IEEE Design and Test, vol. 4, pp. 24-31, 1987.
[15] W.-K. Huang et al., "Approaches for the repair of VLSI/WSI RAMs by row/column deletion," in Proc. Int. Symp. Fault-Tolerant Comput., June 1988, pp. 342-347.
[16] P. Mazumder, J. H. Patel, and W. K. Fuchs, "Methodologies for testing embedded content addressable memories," IEEE Trans. Computer-Aided Design, vol. 7, pp. 11-20, Jan. 1988.
[17] P. Mazumder and J. H. Patel, "An efficient built-in self-testing algorithm for random-access memory," IEEE Trans. Ind. Electron., vol. 36, pp. 394-407, May 1989.
[18] P. Mazumder and Y. S. Jih, "Processor array self-reconfiguration by neural networks," in Proc. IEEE Wafer-Scale Integration, Jan. 1992.
[19] W. S. McCulloch and W. H. Pitts, "A logical calculus of ideas immanent in nervous activity," Bull. Math. Biol., vol. 5, pp. 115-133, 1943.
[20] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[21] J. Melngailis, "Focused ion beam technology and applications," J. Vac. Sci. Technol. B, vol. 5, pp. 469-495, 1987.
[22] A. F. Murray and A. V. W. Smith, "Asynchronous VLSI neural networks using pulse-stream arithmetic," IEEE J. Solid-State Circuits, vol. 23, pp. 688-697, 1988.
[23] G. Nicholas, "Technical and economical aspects of laser repair of WSI memory," in Wafer-Scale Integration, G. Saucier and J. Trilhe, Eds. New York: Elsevier, 1986, pp. 271-280.
[24] M. A. Sivilotti, M. R. Emerling, and C. A. Mead, "VLSI architectures for implementation of neural networks," in Proc. Conf. Neural Networks for Computing, Amer. Inst. of Phys., 1986, pp. 408-413.
[25] C. H. Stapper, A. N. McLaren, and M. Dreckman, "Yield model for productivity optimization of VLSI memory chips with redundancy and partially good product," IBM J. Res. Develop., vol. 24, no. 3, pp. 398-409, May 1980.
[26] C. H. Stapper, "On yield, fault distributions, and clustering of particles," IBM J. Res. Develop., vol. 30, no. 3, pp. 326-338, May 1986.
[27] H. Stopper, "A wafer with electrically programmable interconnections," Dig. IEEE Int. Solid-State Circuits Conf., 1985, pp. 268-269.
[28] M. Tarr, D. Boudreau, and R. Murphy, "Defect analysis system speeds test and repair of redundant memories," Electronics, pp. 175-179, Jan. 1984.
[29] C.-L. Wey et al., "On the repair of redundant RAM's," IEEE Trans. Computer-Aided Design, vol. CAD-6, pp. 222-231, 1987.

Pinaki Mazumder (S'84-M'87) received the B.S.E.E. degree from the Indian Institute of Science in 1976, the M.Sc. degree in computer science from the University of Alberta, Canada, in 1985, and the Ph.D. degree in electrical and computer engineering from the University of Illinois, Urbana-Champaign, in 1987. Presently he is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor.

Prior to this, he was a research assistant at the Coordinated Science Laboratory, University of Illinois, and for over six years was with Bharat Electronics Ltd., India (a collaborator of RCA),


where he developed several types of analog and digital integrated circuits for consumer electronics products. During the summers of 1985 and 1986, he was a Member of the Technical Staff in the Naperville branch of AT&T Bell Laboratories. His research interests include VLSI testing, physical design automation, ultrafast digital circuit design, and neural networks. He is a recipient of Digital's Incentives for Excellence Award, a National Science Foundation Research Initiation Award, and a Bell Northern Research Laboratory Faculty Award. He has published over 50 papers in IEEE and ACM archival journals and reviewed international conference proceedings. He is a Guest Editor of the IEEE Design & Test Magazine special issue on memory testing, to be published in 1993.

Dr. Mazumder is a member of Sigma Xi, Phi Kappa Phi, and ACM SIGDA.

Yih-Shyr Jih (S'85-M'91) received the B.S. degree in computer science and information engineering from National Taiwan University, Taipei, Taiwan, Republic of China, in 1981, the M.S. degree in computer science from Michigan State University, East Lansing, in 1985, and the Ph.D. degree in computer science and engineering from the University of Michigan, Ann Arbor, in 1991.

He is currently a Research Staff Member at the IBM T. J. Watson Research Center, Yorktown Heights, NY. His research interests include VLSI computer-aided design, fault-tolerant computing, and high-bandwidth computer communication.