

Published in IET Nanobiotechnology, 2009, Vol. 3, Iss. 4, pp. 81-102

    Received on 20th January 2009

    Revised on 11th May 2009

    doi: 10.1049/iet-nbt.2009.0002

    ISSN 1751-8741

Performance analysis and comparison of a minimum interconnections direct storage model with traditional neural bidirectional memories

A. Aziz Bhatti

School of Science and Technology, University of Management and Technology, Lahore, Pakistan

    E-mail: [email protected]

Abstract: This study proposes an efficient and improved model of a direct storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures at a considerably lower cost, with reduced complexity and less area required for implementation. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (M x Nx) and (M x Ny) memory matrices, requires O(N) or about 30% of interconnections with weight strength ranging between ±1, and is computationally very efficient as compared to sequential, intraconnected and other bidirectional associative memory (BAM) models of outer-product type that require O(N²) complex interconnections with weight strength ranging between ±M. It is shown that it is functionally equivalent to and possesses all attributes of a BAM of outer-product type, and yet it is a simple and robust in structure, very large scale integration (VLSI), optical and nanotechnology realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The analysis of the retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model as well as of the traditional BAM has been carried out. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that it has log_e N times higher storage capacity, superior performance, and faster convergence and retrieval time when compared to traditional sequential and intraconnected bidirectional memories.

    1 Introduction

The ability to recall memorised information from distorted or partial associations is an unusual characteristic of a biological brain. Over recent years, a number of models of bidirectional neural associative memories, mostly based on the outer-product operation, have been proposed. The storage capacity of such a memory is spatially distributed throughout the memory structure. Such neural network models have received considerable attention because they can be used as associative memories as well as neural systems for solving difficult optimisation problems [1, 2].

These memory models, such as the Hopfield, Kosko and Simpson models among others, are deficient in several respects [1, 3-8]. By virtue of the outer-product construction, these memories store the complementary vectors, and are also inflexible, non-modular and rigid in structure. For hardware implementation of these neural network models, the number of neurons that can be integrated on a chip is severely limited by the area required for interconnections [9, 10]. Therefore these models are unsuitable for implementation in optical or VLSI hardware. However, neural network structures are rather large and complex, and therefore, for improved


efficiency and lower cost, may be implemented using nanotechnology [11-15]. The addition or deletion of a pair of vectors requires changes in the strength of interconnections of the entire memory matrix, which, being fixed and rigid in structure, will not allow such changes.

Additional problems include the storage of false and complementary memories; the computation of complex cost functions [16, 17]; the occurrence of a zero input to the threshold function, resulting from the iterative computational process during an iteration before thresholding; the high degree of interconnection; and the requirement that the length of the vectors to be stored should be about six to seven times the number of vectors to be stored [1, 18], so that they are approximately orthogonal. Therefore the Hopfield, Kosko and other similar models, even though they have been implemented in hardware and software [9, 19-21], are not suitable for use as associative memories in practical applications.

The difficulties associated with the traditional neural network content-addressable memory models have led to the exploration and development of new and improved models.

However, neural networks are information processing systems that are expected to process information in a manner similar to a biological brain, in which a neuron is connected on average to only a small fraction of other neurons. Also, the interest in economical implementation and the efficient use of resources such as communication links further suggests the importance of models with minimum interconnections, or partially connected models, because these topologies are systematic in structure and can be used to scale up to large systems. Hence, there is a significant motivation to investigate the computational capabilities, quality of retrieval, convergence time, reliability, stability and storage capacity of memory models with minimum interconnections.

2 Need of nanotechnology for implementation of neural networks

The neural network structures, being horrendously complex and computationally intensive, are difficult to implement with today's technology such as VLSI or optical hardware [11, 12, 15]. Therefore, to investigate the potential of nanotechnology for the implementation of neural network structures, it is essential to gain some insight into the structural properties of neural networks that are important for implementation. These are discussed first, and then their possible implementation using nanotechnology is described.

    2.1 A note on neural network structures

A neural network is an information processing network inspired by the manner in which a human brain performs those logical operations of mind that carry out reasoning to reach a solution to a specific task or function of interest. A true neural network, the human brain, is collectively and massively parallel, and therefore adaptable and computationally very fast. However, neural networks can now be applied to those computational areas that were once thought to be the exclusive domain of human intelligence. A neural network computational system creates synaptic connections between processing elements, which are equivalent to the neurons of a human brain. Neural networks are, therefore, modelled using various electronic circuits that mimic the characteristics of human nerve cells [11, 12]. In general, neural network systems are composed of many non-linear computational elements that collectively operate in parallel to produce the solution to a given problem. These neuron-like nodes, connected to each other via synaptic weights, can output a signal based on the sum of their inputs, the output being the result of an activation function.

The areas that show great potential for neural network applications include pattern classification tasks such as speech and image recognition, associativity, noise filtering, NP-complete problems and non-linear optimisation involving several variables. These areas of application are exceedingly difficult for conventional computers and data processing systems. One such application of great importance is associativity, which forms the basis of the development of various associative memory structures [9, 19-21]. One such structure, considered in this paper, is the associative memory of direct storage type, which requires O(N) or about 30% of the interconnections when compared to an associative memory of outer-product type, a complex and completely connected model that requires O(N²) interconnections. The complexity and size of this problem become horrendous when N, the length in bits of the information vectors to be stored, becomes large, and it is hoped that nanotechnology-based implementations of neural networks will provide an acceptable solution to such problems [11-15, 22].
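As a rough numerical illustration of the interconnection counts above, the following Python sketch (the sizes are illustrative choices, not from the paper) compares a fully connected outer-product memory with a direct storage model that keeps only the M stored rows of each memory matrix:

```python
# Sketch: interconnection counts for a fully connected outer-product
# memory versus a direct storage model. Assumption (illustrative): the
# direct storage model keeps the M*Nx and M*Ny stored entries, while a
# fully connected Nx-by-Ny BAM needs Nx*Ny weighted links.

def outer_product_links(nx: int, ny: int) -> int:
    """Fully connected BAM: every X neuron couples to every Y neuron."""
    return nx * ny

def direct_storage_links(m: int, nx: int, ny: int) -> int:
    """Direct storage: only the M stored rows of each memory matrix."""
    return m * (nx + ny)

nx, ny, m = 64, 64, 9          # length about 7x the number of vectors
full = outer_product_links(nx, ny)        # 4096 links
direct = direct_storage_links(m, nx, ny)  # 1152 links
print(direct / full)           # roughly 0.28, i.e. "about 30%"
```

With Hopfield's length rule N ≈ 7M (quoted later in the paper), the ratio 2M·N/N² = 2/7 ≈ 0.29 matches the "about 30%" figure in the text.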

    Neural network-based solutions that have been developedto date are largely software based.

Software simulations, however, are slow, and when the networks get larger (and therefore more powerful and useful), the computational time becomes enormous. For example, networks with 10 000 connections can easily overwhelm a computer. In comparison, the human brain has about 100 billion neurons, each of which is connected to about 5000 other neurons. On the other hand, if a network is trained to perform a specific task, perhaps taking many days or months to train, the final useful result can be etched onto a piece of silicon and also mass produced [11-14, 22].

Because software simulations are performed on conventional sequential computers, they do not take advantage of the inherent parallelism of neural network


architectures. Consequently, they are relatively slow. One frequently used measurement of the speed of a neural network processor is the number of interconnections it can perform per second. For example, the fastest software simulations available can perform up to about 18 million interconnects per second. Such speeds, however, currently require expensive supercomputers to achieve. Even so, 18 million interconnects per second is still too slow to perform many classes of pattern classification tasks in real time. These include radar target classification, sonar target classification, large-scale multidimensional and multiassociative memory structures, automatic speaker identification, automatic speech recognition, electrocardiogram analysis and so on.

2.2 Use of nanotechnology for implementation of neural networks

The implementation of neural network systems has lagged behind their theoretical potential because of the difficulties in building neural network hardware, primarily the large number of neurons and weighted connections required. Hardware systems are often custom designed and built to implement one particular neural network architecture and are not easily, if at all, reconfigurable to implement different architectures. A true physical neural network chip, for example, has not yet been designed and successfully implemented.

The problem with pure hardware implementation of a neural network with today's technology is the inability to physically form a great number of connections and neurons. On-chip learning can exist, but the size of the network would be limited by digital processing methods and the associated electronic circuitry [11-14, 22]. One of the difficulties in creating true physical neural networks lies in the highly complex manner in which a physical neural network must be designed and built.

Most researchers believe that a solution to creating a true physical and artificial neural network lies in the use of nanotechnology and the implementation of analogue variable connections. The term nanotechnology generally refers to nanometre-scale manufacturing processes, materials and devices, as associated with, for example, nanometre-scale lithography and nanometre-scale information storage. Nanometre-scale components find utility in a wide variety of fields, particularly in the fabrication of micro-electrical and micro-electromechanical systems (commonly referred to as MEMS). Micro-electrical nano-sized components include transistors, resistors, capacitors and other nano-integrated circuit components. MEMS devices include, for example, micro-sensors, micro-actuators, micro-instruments, micro-optics and the like [11-15, 22].

In general, nanotechnology presents a solution to the problems faced in the rapid pace of computer chip design in recent years. According to Moore's law, the number of switches that can be produced on a computer chip has doubled every 18 months. Chips now can hold millions of transistors. However, it is becoming increasingly difficult to increase the number of elements on a chip using present technologies, and at the present rate the theoretical limit of silicon-based chips will be reached in the next few years. The number of elements that can be manufactured on a chip determines the data storage and processing capabilities of micro-chips. New technologies are required that will allow for the development of higher-performance chips. Equally valuable is the development of far less complex methods, algorithms and models that require a reduced number of interconnections and neurons and fewer computations, while yielding the same or equivalent functionality, performance and computational abilities as the existing ones, but at a considerably lower cost, with reduced complexity and less hardware. One such model of a neural associative memory is proposed in this paper.

Present chip technology is also limiting when wires need to be crossed on a chip. For the most part, the design of a computer chip is limited to two dimensions. Each time a circuit must cross another circuit, another layer must be added to the chip. This increases the cost and complexity, and decreases the speed of the resulting chip. A number of alternatives to standard silicon-based complementary metal oxide semiconductor (CMOS) devices have been proposed. The common goal is to produce logic devices on a nanometre scale. Such dimensions are more commonly associated with molecules than with integrated circuits [9, 12, 13, 15].

Integrated circuits and the electrical components thereof that can be produced at a molecular and nanometre scale include devices such as carbon nanotubes and nanowires, which essentially are nanoscale conductors (nanoconductors). Nanoconductors are tiny conductive tubes (i.e. hollow) or wires (i.e. solid) with a very small size scale (e.g. 1.0-100 nm in diameter and hundreds of microns in length). Their structure and fabrication have been widely reported and are well known in the art. Carbon nanotubes, for example, exhibit a unique atomic arrangement and possess useful physical properties such as one-dimensional electrical behaviour, quantum conductance and ballistic electron transport [11-13, 22].

    Attempts have been made to construct electronic devicesutilising nano-sized electrical devices and components. Forexample, a molecular wire crossbar memory is constructedfrom crossbar arrays of nanowires sandwiching moleculesthat act as on/off switches.

Neural networks are non-linear in nature and naturally analogue. A neural network is a very non-linear system, in that small changes to its input can create large changes in its output. Nanotechnology has not yet been applied to the creation of truly physical neural networks.


Based on the foregoing, a physical neural network that incorporates nanotechnology is a potential solution to the problems encountered by prior-art neural network solutions. However, further research is needed to ensure that a true physical neural network can be designed and constructed without relying on computer simulations for training, or on standard digital (binary) memory to store connection strengths.

3 Retrieval and storage constraints in traditional sequential BAMs

Consider the two associated sets X and Y, each consisting of M bipolar binary vectors, Nx-bit and Ny-bit long, respectively. The X^m and Y^m for m = 1, 2, ..., M form associated pairs (X^m, Y^m) of vectors that are stored in a sequential or parallel BAM. In general, the lengths of the vector pairs to be stored fall far short of Hopfield's conjecture that the lengths of the vectors to be stored should be about six to seven times the number of vectors M to be stored. As a result, the resolution among vectors reduces, which degrades the quality of retrieval.

In addition, when M memory vectors are stored, the retrieval of a desired associated pair of memory vectors (X^k, Y^k) will be disturbed by the noise introduced by the other M - 1 pairs (X^m, Y^m), m ≠ k, m = 1, ..., M. Therefore memory vectors that are too close to each other, in terms of Hamming distance, will introduce a greater amount of noise, which makes retrieval difficult. Consequently, for improved quality of storage and retrieval, some researchers [3-5, 7, 8, 16, 18, 19, 23] have suggested the following guidelines or restrictions.

3.1 Hopfield's storage constraint

Hopfield, through extensive simulation studies, found that for the binary vectors to be pseudo-orthogonal, the length of the binary vectors should be about six to seven times the number of vectors, M, to be stored [1, 18]. Therefore, to satisfy Hopfield's condition, with N, the length of the vectors, equal to (7 or 6)M, the limit on the number of vectors that can be stored is given as

M ≤ N/7 to N/6   (1)

where N is the length, in bits, and M is the number of binary vectors to be stored.
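The bound of eq. (1) can be sketched as a one-line helper (a minimal illustration; the function name and example lengths are my own):

```python
import math

def max_stored_vectors(n_bits: int, factor: int = 7) -> int:
    """Hopfield's empirical rule (eq. (1)): vector length should be
    about six to seven times the number of stored vectors, so
    M <= N/7 (conservative) up to M <= N/6 (looser)."""
    return n_bits // factor

print(max_stored_vectors(128))      # conservative bound: 128 // 7 = 18
print(max_stored_vectors(128, 6))   # looser bound: 128 // 6 = 21
```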

Most of the sequential bidirectional memories reported in the literature [6-8, 10] fall, in general, far short of Hopfield's condition. Therefore additional restrictions, such as the continuity condition, are imposed, which restrict their applications.

3.2 Orthogonality and continuity constraints

In sequential bidirectional memories, the length of the stored vectors generally falls far short of Hopfield's condition. As a result, the signal-to-noise ratio (SNR), storage capacity and level of orthogonality or resolution among them reduce.

Therefore the fundamental constraint for improved quality of retrieval is that the memory vectors to be stored should be approximately orthogonal, and that the Hamming distance should be uniformly distributed along the lengths of both vectors X^m and Y^m, m = 1, 2, ..., M, forming an associated pair of vectors.

This requirement is termed the continuity constraint, and for a sequential BAM it is given as [4, 5, 7, 8, 17, 23, 24]

(1/Nx) H(X^i, X^j) ≅ (1/Ny) H(Y^i, Y^j)  for i, j = 1, 2, ..., M   (2)

where Nx and Ny are the dimensions of X^m and Y^m, respectively, and H(·,·) denotes the Hamming distance.
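The continuity check of eq. (2) can be sketched directly (a minimal illustration; the tolerance value and example vectors are my own, not from the paper):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two bipolar (+1/-1) vectors."""
    return int(np.sum(a != b))

def continuity_ok(X, Y, tol=0.1):
    """Eq. (2): normalised Hamming distances between pairs should
    match across the X and Y sets. tol is an illustrative tolerance."""
    M, Nx = X.shape
    _, Ny = Y.shape
    for i in range(M):
        for j in range(i + 1, M):
            if abs(hamming(X[i], X[j]) / Nx
                   - hamming(Y[i], Y[j]) / Ny) > tol:
                return False
    return True

X = np.array([[1, 1, -1, -1],
              [1, -1, 1, -1]])
Y = np.array([[1, -1],
              [1, 1]])
print(continuity_ok(X, Y))   # both normalised distances are 0.5: True
```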

4 Characteristics of unipolar and bipolar codings

In neural associative memories the information is stored in bipolar form, and the basic decision-making element, called a neuron, has two states, on or off (+1 or -1). These two binary states can be represented by the bipolar binary coding that uses +1 and -1, or the unipolar binary coding that uses 1 and 0. It is shown that, in the context of neural network computing, bipolar binary coding is far superior to unipolar coding. The use of bipolar coding yields improved SNR, storage capacity, level of orthogonality among the memory vectors to be stored and quality of retrieval.

In neural network associative memory models, such as those of direct storage and outer-product type [4, 5], the information to be stored is coded in the form of bipolar binary vectors that are arranged as a set of M bipolar binary vectors of N bits long. The length of the vectors in bits should be about six to seven times the number of vectors to be stored. In the context of direct storage and outer-product types of memory matrices, the retrieval process is a function of the amount of cross-correlation. For better resolution and improved retrieval, the information vectors to be stored should be coded using bipolar binary coding, and the memory vectors so obtained should be approximately orthogonal [4, 5].

4.1 Orthogonality characteristics of unipolar and bipolar codings

    For a set of binary coded vectors, some of the necessaryconditions to achieve the maximum level of orthogonality


among the N-bit long vectors are that the number of components of each polarity in each of the N-element vectors should be equal, i.e. N/2 of each. The maximum number of binary vectors M having an equal number of components of each polarity that can be generated from N bits, where N is even, can be written as [4, 5]

M = N! / ((N/2)!)²   (3)

For N = 4, there exists a set of six vectors in which one half of the vectors are the complements of the other half. The cross-correlation (inner product) of the unipolar and bipolar versions, respectively, of these six vectors with themselves, showing the level of orthogonality among them, can be written as (see the equation at the bottom of the page)

where Cu and Cb are the cross-correlation matrices of the respective unipolar and bipolar binary vectors.

The off-diagonal elements of Cu show that the noise level is at least one-half of the signal strength (the values of the diagonal elements) when unipolar coding is used. The diagonal elements of Cb indicate that the signal strength is doubled, and the off-diagonal elements show that the noise level is reduced to zero for all non-complement pairs when bipolar coding is used. Clearly, for the same signal-strength to noise-level ratio, the storage capacity is much higher when bipolar coding is used.
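The Cu and Cb matrices above can be reproduced numerically. The following sketch (NumPy is my choice of tool here) generates the six balanced 4-bit vectors of eq. (3) and forms both cross-correlation matrices:

```python
import itertools
import numpy as np

# The six 4-bit unipolar vectors with two 1s and two 0s
# (eq. (3): M = N! / ((N/2)!)^2 = 6 for N = 4).
U = np.array([v for v in itertools.product([0, 1], repeat=4)
              if sum(v) == 2])
B = 2 * U - 1                 # bipolar versions: 0 -> -1, 1 -> +1

Cu = U @ U.T                  # unipolar cross-correlation matrix
Cb = B @ B.T                  # bipolar cross-correlation matrix

print(np.diag(Cu))            # signal strength 2 with unipolar coding
print(np.diag(Cb))            # signal strength 4 with bipolar coding
# Off-diagonal: Cu entries are 1 (noise = half the signal) except for
# complement pairs; Cb entries are 0 for all non-complement pairs and
# -4 for each vector paired with its complement.
```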

4.2 Retrieval characteristics of unipolar and bipolar binaries

It is shown that the use of a bipolar retrieval key has the effect of doubling the length of the probe and stored vectors, making the number of +1s equal to the number of -1s in the stored vectors and the number of 0s equal to the number of 1s in the probe vector, even if the original set of stored vectors has an unequal number of +1s and -1s and the original probe vector has an unequal number of 0s and 1s. Consider a memory matrix W of outer-product type and a unipolar probe vector X̂. The estimate X̃ of a stored vector, when the bipolar version of X̂ is used, can be written as [4, 5]

X̃ = W (2X̂ - U)

or

X̃ = [W  Wc] [X̂; X̂c]   (4)

where U is a unity column vector, X̂c = U - X̂ is the complement of X̂ and Wc = -W is the complement of W.

Although the storage requirements and the original vector lengths remain constant, the retrieval process perceives the lengths of the stored and probe vectors as doubled, with the elements of each polarity equal in both vectors. Note that these benefits are achieved by utilising the complement vectors of the stored and probe vectors.

In unipolar coding, however, the noise level is much higher, but it does not allow the storage of complementary vectors, which is the case when bipolar coding is used in constructing the memory matrix W of outer-product-type models.
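The identity in eq. (4) can be checked numerically. In this sketch the memory matrix and probe are random stand-ins (illustrative values, not from the paper); both forms of eq. (4) give the same pre-threshold estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8
W = rng.integers(-3, 4, size=(N, N))   # stand-in outer-product memory
x_hat = rng.integers(0, 2, size=N)     # unipolar probe vector
U = np.ones(N, dtype=int)              # unity column vector

# Left: bipolar version of the probe applied to W, W (2*X_hat - U).
lhs = W @ (2 * x_hat - U)
# Right: stacked form [W  Wc] [X_hat; X_hat_c] with Wc = -W,
# X_hat_c = U - x_hat.
rhs = np.hstack([W, -W]) @ np.hstack([x_hat, U - x_hat])

assert np.array_equal(lhs, rhs)        # eq. (4): the two forms agree
print(lhs.shape)                       # (8,)
```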

    4.3 A note on a probe vector

A probe vector is an initial input vector that, in general, may contain incomplete, noisy or partially erroneous information, with the remaining bits, being unknown, set to zero; it is used to initiate the retrieval of a memory pattern stored in an associative memory.

The cross-correlation matrices referred to in Section 4.1 (the equation at the bottom of the page) are Cu = Vu Vu^t and Cb = Vb Vb^t, where the rows of Vu are the six unipolar vectors and Vb = 2Vu - 1 holds their bipolar versions:

Vu =
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 0

Cu =
2 1 1 1 1 0
1 2 1 1 0 1
1 1 2 0 1 1
1 1 0 2 1 1
1 0 1 1 2 1
0 1 1 1 1 2

Vb =
-1 -1  1  1
-1  1 -1  1
-1  1  1 -1
 1 -1 -1  1
 1 -1  1 -1
 1  1 -1 -1

Cb =
 4  0  0  0  0 -4
 0  4  0  0 -4  0
 0  0  4 -4  0  0
 0  0 -4  4  0  0
 0 -4  0  0  4  0
-4  0  0  0  0  4

4.3.1 Minimum limit in bits on the probe vector: In neural associative memories, the information is stored in bipolar form, and the basic decision-making element, called a neuron, has two states, on or off (+1 or -1). Therefore at least two bits are essential to distinguish between two states. Also, if a total of N bits in the form of +1 or -1, excluding zeros, is known, then it can be postulated that ⌈N/2⌉ + 1 is the minimum number of correct bits required in a probe vector to retrieve an information vector from the memory [4, 5]. Therefore the minimum number of bits, Nm, required to retrieve an entity can be computed as

Nm = ⌈N/2⌉ + 1   (5)

where ⌈·⌉ is a ceiling function.

For N = 1 or 2, the minimum number of bits is Nm = 2.

For reliable operation, the signal-to-noise ratio, g = SNR, must be much greater than 1.

The number of memory vectors M that can be stored and retrieved is given as

√M < √N / g   (6)

where ⌊·⌋ is a floor function and g = √(N/M) [25, 26]. For N = 2 and g > 1, (6) gives M = 1.

Therefore, with M = 1, the minimum length in bits of a probe vector is 2 or greater.

However, error correction capability is vital for any associative memory. For the correction of one error, the minimum Hamming distance among vectors must be at least 3, but Nm = 2 does not provide any error correction ability.
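Eq. (5) can be sketched as a small helper (the function name is my own; the examples follow the N = 1 or 2 remark above):

```python
import math

def min_probe_bits(n: int) -> int:
    """Eq. (5): minimum number of correct bits in a probe vector,
    Nm = ceil(N/2) + 1. For N = 1 or 2 this evaluates to 2, matching
    the two bits needed to distinguish two states."""
    return math.ceil(n / 2) + 1

print(min_probe_bits(1))   # 2
print(min_probe_bits(2))   # 2
print(min_probe_bits(9))   # ceil(4.5) + 1 = 6
```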

5 Structure and implementation of a conventional sequential BAM

Let X and Y be associated sets of M bipolar vectors of lengths Nx and Ny bits, respectively. Each of the X^m and Y^m vectors forms an associated pair, for all m = 1, 2, ..., M. The bidirectional memory matrix, W, of a conventional BAM is constructed by performing the outer-product operation on the two associated sets X and Y of bipolar binary vectors.

Next, for the purpose of comparison, the storage algorithm, retrieval process, SNR, storage capacity, interconnection requirements and stability analysis of the traditional sequential BAM are carried out.

5.1 BAM's storage algorithm

The storage algorithm for the memory matrix W of a conventional BAM can be written in the expanded form as

W = [X^1_1, X^1_2, ..., X^1_Nx]^t [Y^1_1, Y^1_2, ..., Y^1_Ny] + ... + [X^M_1, X^M_2, ..., X^M_Nx]^t [Y^M_1, Y^M_2, ..., Y^M_Ny]   (7)

The memory matrix W can be written in the compact form as

W = Σ_{m=1}^{M} X^m Y^{m,t}   (8)

W^t = Σ_{m=1}^{M} Y^m X^{m,t}   (9)

where t signifies the transpose of a vector or matrix.

W_ij = Σ_{m=1}^{M} X^m_i Y^{m,t}_j,  i = 1, ..., Nx;  j = 1, ..., Ny   (10)

where

X^m = (X^m_1, ..., X^m_j, ..., X^m_Nx)   (11)

and

Y^m = (Y^m_1, ..., Y^m_i, ..., Y^m_Ny)   (12)
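The storage algorithm of eqs. (7)-(10) can be sketched in a few lines of NumPy (the sizes and random pairs are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
M, Nx, Ny = 3, 8, 6
X = rng.choice([-1, 1], size=(M, Nx))   # stored X^m vectors
Y = rng.choice([-1, 1], size=(M, Ny))   # associated Y^m vectors

# Eq. (8): W = sum over m of the outer products X^m (Y^m)^t.
W = np.zeros((Nx, Ny), dtype=int)
for m in range(M):
    W += np.outer(X[m], Y[m])

assert np.array_equal(W, X.T @ Y)       # equivalent single matrix form
print(W.shape)                          # (8, 6)
```

Note that each weight W_ij is a sum of M terms of ±1, which is the "weight strength ranging between ±M" mentioned in the abstract.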

5.2 Retrieval analysis

Let X̂^k be an imperfect probe vector that is closest, in terms of Hamming distance, to X^k, which forms an association pair with Y^k and is one of the stored pairs of vectors.

The retrieval process starts at an initial time t0, when the probe vector X̂^k is applied as input to the memory matrix W, and it terminates when the current state after p iterations becomes equal to the previous state at the (p - 1)th iteration, where p = 1, ..., P is the iteration index. The total output estimate, Ỹ^k_i, of the ith bit of Y^k is given as

Ỹ^k_i = Sgn[ Σ_{j=1}^{Nx} W_ij X̂^k_j ]   (13)

or

Ỹ^k_i = Sgn[ Σ_{j=1}^{Nx} ( Σ_{m=1}^{M} Y^m_i X^m_j ) X̂^k_j ]   (14)


The conventional BAM is sequential in nature and is shown in Fig. 1.

Similarly, the estimate of the ith bit, X̃^k_i, when Ŷ^k, an imperfect probe vector closest, in terms of Hamming distance, to Y^k, which forms an association pair with X^k and is one of the stored vectors, is applied, is given as

X̃^k_i = Sgn[ Σ_{j=1}^{Ny} W_ij Ŷ^k_j ] = Sgn[ Σ_{j=1}^{Ny} ( Σ_{m=1}^{M} X^m_i Y^m_j ) Ŷ^k_j ]   (15)

The next states X̃^k_i and Ỹ^k_i are given as

Ỹ^k_i(t + pΔt) = -1 if ( Σ_{j=1}^{Nx} W_ij X̂^k_j ) < 0; +1 otherwise   (16)

Similarly

X̃^k_i(t + pΔt) = -1 if ( Σ_{j=1}^{Ny} W_ij Ŷ^k_j ) < 0; +1 otherwise   (17)

The iterative process terminates after p iterations, when the estimates of the current states are equal to the previous states:

X̃^k_i(t0 + pΔt) = X̃^k_i(t0 + (p - 1)Δt)   (18)

Ỹ^k_i(t0 + pΔt) = Ỹ^k_i(t0 + (p - 1)Δt)   (19)
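The bidirectional retrieval of eqs. (13)-(19) can be sketched as a small loop. This is a minimal illustration with a single stored pair and a one-bit-corrupted probe (my own example values); zero sums are mapped to +1, following the "+1 otherwise" convention of eqs. (16)-(17):

```python
import numpy as np

def sgn(v):
    """Threshold of eqs. (16)-(17): -1 for negative sums, +1 otherwise."""
    return np.where(v < 0, -1, 1)

def bam_retrieve(W, x_probe, max_iters=50):
    """Iterate X -> Y -> X until two successive states repeat
    (eqs. (18)-(19))."""
    x = x_probe.copy()
    y = sgn(W.T @ x)                  # eq. (13): first Y estimate
    for _ in range(max_iters):
        x_new = sgn(W @ y)            # eq. (15): X estimate from Y
        y_new = sgn(W.T @ x_new)      # eq. (13): Y estimate from X
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

X0 = np.array([1, -1, 1, -1, 1, 1, -1, -1])
Y0 = np.array([1, 1, -1, -1, 1, -1])
W = np.outer(X0, Y0)                  # store a single pair (eq. (8))
probe = X0.copy()
probe[0] = -probe[0]                  # corrupt one bit of the probe
x_rec, y_rec = bam_retrieve(W, probe)
print(np.array_equal(x_rec, X0), np.array_equal(y_rec, Y0))  # True True
```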

5.3 SNR and capacity analysis of a conventional BAM

Equation (14) can be written in the form

Ỹ^k_i = Sgn[ Y^k_i Σ_{j=1}^{Nx} X^k_j X̂^k_j + Σ_{j=1}^{Nx} Σ_{m≠k}^{M} Y^m_i X^m_j X̂^k_j ]   (20)

Let hx be the Hamming distance between the probe vector X̂^k and the corresponding stored vector X^k. Therefore (20) can be written in the form

Ỹ^k_i = Sgn[ (Nx - 2hx) Y^k_i + Σ_{j=1}^{Nx} Σ_{m≠k}^{M} Y^m_i X^m_j X̂^k_j ]   (21)

In (21), the first term is the signal and the second term is noise [25, 26].

Next, note that the components of the stored vectors are statistically independent. According to the central limit theorem [27, 28], for large N and M, the second term in (21) is noise consisting of a sum of Nx(M - 1) independently and identically distributed (i.i.d.) random variables, each of which is +1 or -1 with equal probability 1/2, and it can be approximated by a Gaussian distribution with mean zero and variance σ², which is given as

σ² = Nx(M - 1)   (22)

The SNR is given as

SNR = S/σ = (Nx - 2hx) / √(Nx(M - 1)) = √(Nx/(M - 1)) (1 - 2hx/Nx)   (23)

and for Nx, M ≫ 1 and with hx ≈ 0

SNR ≈ (Nx - 2hx) / √(Nx M) ≈ Nx / √(Nx M) = √(Nx/M)   (24)
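Eqs. (23) and (24) can be written out directly (a small sketch; the function names and example sizes are my own):

```python
import math

def bam_snr(nx: int, m: int, hx: int) -> float:
    """Eq. (23): SNR = (Nx - 2*hx) / sqrt(Nx * (M - 1))."""
    return (nx - 2 * hx) / math.sqrt(nx * (m - 1))

def bam_snr_approx(nx: int, m: int) -> float:
    """Eq. (24): for Nx, M >> 1 and hx ~ 0, SNR ~ sqrt(Nx / M)."""
    return math.sqrt(nx / m)

# A noisier probe (larger hx) lowers the exact SNR of eq. (23).
print(bam_snr(128, 16, 0), bam_snr(128, 16, 8))
print(bam_snr_approx(128, 16))   # sqrt(8)
```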

In sequential BAMs, X input produces Y as output, and Y input produces X as output, but not both together. Therefore the conventional BAM is sequential in nature, and the signal component of the SNR is directly proportional to the square root of the minimum of the lengths Nx and Ny of the vectors in the sets X and Y. Therefore, letting N = min(Nx, Ny), this gives

SNR = √N / √M   (25)

Figure 1 Structure of a conventional BAM

Note that doubling the length, L = Nx + Ny = 2N, increases the SNR by a factor of √2. Consequently, the performance of the sequential BAMs commonly proposed [7, 8, 10, 11, 17, 24] falls far short of Hopfield's condition, and therefore the quality of performance of traditional BAMs is degraded by a factor of about √2 as compared to a Hopfield-type memory. Thereby, for improved performance, the continuity restriction is commonly proposed.

    Next, the analysis of storage capacity ofMbinary vectors asa function of their length, Nbits, which is the minimum ofNxand Ny, is carried out using the same approach as given

    in[25]. The storage capacity ofMvectors as a function oftheirN-bit length is given as

    M N2logN

    (26)

    All logs are natural logs.
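The capacity rule (26) is easy to evaluate; a minimal sketch (the function name is illustrative, natural log assumed as stated above):

```python
import math

def bam_capacity(n: int) -> int:
    """Storage capacity M = N / (2 ln N) of Eq. (26); log is natural."""
    return int(n / (2 * math.log(n)))

# For example, 1000-bit vectors give a capacity of about 72 patterns.
```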

5.4 Stability analysis of the traditional sequential BAM

The retrieval process in a traditional sequential BAM of outer-product type only includes the coupled associativity that forms the two-way sequential search. For correct recall, every stored memory pattern pair (X^m, Y^m), for m = 1, ..., M, should form a local minimum on the energy surface. Therefore it must be shown that every stored memory pattern pair is asymptotically stable. The BAM is a simple variant of the Hopfield model, and therefore its memory matrix can be represented as a symmetric memory matrix; to analyse the stability characteristics, an energy function is constructed using the quadratic-form approach.

Let each vector pair (X^m, Y^m) be concatenated in series to form a set of (N_x + N_y) = L bit long compound vectors Z^m, defined as

Z^m = X^m ⊕ Y^m = [X^m ⋮ Y^m],  m = 1, ..., M    (27)

where ⊕ is a concatenation operator.

Using the outer-product approach, the memory matrix of size L × L is constructed as

T = Σ_{m=1}^{M} Z^m Z^{m,t}    (28)

In order to analyse the various characteristics of the BAM, its expanded version is obtained as

T = [X ; Y][X^t  Y^t] = [ X X^t   X Y^t ]
                        [ Y X^t   Y Y^t ]    (29a)

T = [ T_{Nx,Nx}    W_{Nx,Ny} ]
    [ W^t_{Ny,Nx}  T_{Ny,Ny} ]    (29b)

where

X^m = (X^m_1, ..., X^m_j, ..., X^m_{Nx})
Y^m = (Y^m_1, ..., Y^m_i, ..., Y^m_{Ny})
Z^m = [X^m ⋮ Y^m] = (Z^m_1, ..., Z^m_i, ..., Z^m_L)
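The construction in (27)-(28) can be sketched in a few lines of NumPy (an assumed convention: stored vectors as matrix rows; the function name is illustrative):

```python
import numpy as np

def bam_memory_matrix(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Build the L x L matrix T = sum_m Z^m Z^{m,t} of Eq. (28), where
    Z^m = X^m concatenated with Y^m per Eq. (27).
    X is M x Nx and Y is M x Ny, with bipolar (+/-1) entries."""
    Z = np.hstack([X, Y])   # rows are the compound vectors Z^m
    return Z.T @ Z          # equals the sum of outer products Z^m Z^{m,t}
```

Note that the off-diagonal block of the result is exactly the cross-association block X Y^t of (29a).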

The energy function of the memory matrix [T] given by (29a) may be defined as

E(Z) = E(X ⊕ Y) = −(1/2) [X^t ⋮ Y^t] [ X X^t  X Y^t ; Y X^t  Y Y^t ] [X ; Y]    (30)

where E(X ⊕ Y) = E(X, Y).

In the BAM, the interconnections of the self-sub-matrices A = [X X^t] and B = [Y Y^t], which constitute intrafield connections, are ignored; as a result, it loses its parallelism property, its SNR, storage capacity, performance and reliability characteristics erode, and its functionality reduces to a conventional sequential BAM that is always constrained by the continuity requirement.

Therefore, in (30), replace the self-associative memory matrices [X X^t] and [Y Y^t] with null matrices [0], and let [X Y^t] = [W] and [Y X^t] = [W^t]. The energy function in (30) is then given as

E(X, Y) = −(1/2) [X^t ⋮ Y^t] [ 0  W ; W^t  0 ] [X ; Y]    (31)

The energy function in (31) can be written in the decomposed form as

E(X, Y) = −(1/2) (X^t W Y + Y^t W^t X)    (32)

The retrieval process in a traditional sequential BAM is initiated when an imperfect probe vector X̂ is given as input and the estimate Ỹ is produced at the output as

Ỹ = Sgn[W^t X̂] = Sgn[ ( Σ_{m=1}^{M} Y^m X^{m,t} ) X̂ ]    (33)


The output estimate of the ith bit, Ỹ_i, is given as

Ỹ_i = Sgn[ Σ_{m=1}^{M} Y^m_i ( Σ_{j=1}^{N_x} X^m_j X̂_j ) ]    (34)

Similarly

X̃_i = Sgn[ Σ_{m=1}^{M} X^m_i ( Σ_{j=1}^{N_y} Y^m_j Ŷ_j ) ]    (35)
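The two-way sequential search of (33)-(35) can be sketched as follows (assumptions: W = Σ_m X^m Y^{m,t} of size N_x × N_y, and ties in Sgn resolved to +1; the function name is illustrative):

```python
import numpy as np

def bam_recall(W: np.ndarray, x_probe: np.ndarray, iters: int = 10):
    """Sequential BAM recall, Eqs. (33)-(35): bounce the state between
    the two layers through W (Nx x Ny) until it stabilises."""
    sgn = lambda v: np.where(v >= 0, 1, -1)   # ties resolved to +1
    x = x_probe.copy()
    for _ in range(iters):
        y = sgn(W.T @ x)        # Y-layer update, Eq. (34)
        x_new = sgn(W @ y)      # X-layer update, Eq. (35)
        if np.array_equal(x_new, x):
            break               # a stable (X, Y) pair has been reached
        x = x_new
    return x, y
```

With a single stored pair, a noiseless probe is returned unchanged together with its associate.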

For these recall conditions, the energy function from (32) can be written as [6, 23]

E(X, Y) = −(1/2) [ X^t ( Σ_{m=1}^{M} X^m Y^{m,t} ) Y + Y^t ( Σ_{m=1}^{M} Y^m X^{m,t} ) X ]    (36)

or

E(X, Y) = −(1/2) [ X^t ( Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) ) + Y^t ( Σ_{m=1}^{M} Y^m ( Σ_{j=1}^{N_x} X^m_j X_j ) ) ]    (37)

Let

a_m = Σ_{j=1}^{N_x} X^m_j X̂_j  and  b_m = Σ_{j=1}^{N_y} Y^m_j Ŷ_j    (38)

a_m and b_m can be perceived as weighting functions, and are computed as the inner product of a probe vector X̂ or Ŷ with the respective stored vector X^m or Y^m.

The conventional BAM functions as a sequential network, and its sequential operation is shown in Fig. 1.

Assume that E(X′, Y) is the energy of the next state, in which Y stays the same as in the previous state. For the recall process of the pair (X, Y), the change in energy, ΔE_x, is given as

ΔE_x = E(X′) − E(X)    (39)

Using (37) and ignoring the factor 1/2,

ΔE_x = −[ Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X′ + [ Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X
     = −[ Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] (X′ − X)    (40)

It follows from (34) and (35) that

[ Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X′ = | Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) |    (41)

where the quantity enclosed in vertical bars, | |, designates the absolute value, and is, therefore, positive.

Let

|λ| = | Σ_{m=1}^{M} X^m ( Σ_{j=1}^{N_y} Y^m_j Y_j ) |    (42)

Therefore ΔE_x can be written as

ΔE_x = −|λ| X′ (X′ − X)    (43)

Now, the examination of the ith bit in (43) shows that

(i) If X′_i − X_i > 0, then X′_i = 1 and X_i = −1; this gives ΔE_x < 0.

(ii) If X′_i − X_i < 0, then X′_i = −1 and X_i = 1; this implies that ΔE_x < 0.

(iii) If X′_i − X_i = 0, then ΔE_x = 0.

Therefore it follows from conditions (i)-(iii) that ΔE_x ≤ 0.

Similarly, for (Y′_i − Y_i), it can be shown that ΔE_y is decreasing, that is, ΔE(X, Y) ≤ 0.

The difference between the energies of the current state E(X, Y) and the next state E(X′, Y) is negative. Since E(X, Y) is bounded by 0 ≥ E(X, Y) ≥ −M(N_y × N_x) for all X and Y, the sequential BAM converges to a stable point where the energy forms a local minimum.
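The decomposed energy (32) reduces to E(X, Y) = −X^t W Y, and a stored pair sits lower on the energy surface than a perturbed state; a minimal numerical sketch (single stored pair, illustrative names):

```python
import numpy as np

def bam_energy(W: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (32): E(X, Y) = -(1/2)(X^t W Y + Y^t W^t X) = -X^t W Y."""
    return -float(x @ W @ y)

x1, y1 = np.array([1, -1, 1]), np.array([1, -1])
W = np.outer(x1, y1)                              # store the pair (X^1, Y^1)
e_stored = bam_energy(W, x1, y1)                  # energy at the stored pair
e_flip = bam_energy(W, np.array([1, 1, 1]), y1)   # one X-bit flipped
```

Here `e_stored` is strictly lower than `e_flip`, consistent with the stored pair forming a local minimum.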

6 Structure and implementation requirements of the IBAM

In the proposed direct storage sequential bidirectional memory model, IBAM, the vector pairs (X^m, Y^m), m = 1, ..., M, are directly stored in the form of two (M × N_x) and (M × N_y) memory matrices A and B, respectively. This direct storage arrangement is functionally equivalent to the outer-product format, but the actual outer-product operation is not performed. The unique characteristics of the direct storage model permit the addition or deletion of vectors in both sets X and Y without affecting the other stored vectors. As shown in Fig. 2, the lengths N_x and N_y of the stored vectors in sets X and Y can be increased or decreased. Therefore this memory model is modular and expandable. It has a flexible structure and is capable of implementing any-to-any


directional mapping; this paper considers only the bidirectional mapping.

6.1 Storage prescription

Let X and Y be two associated sets of M bipolar vectors in which each pair (X^m, Y^m) forms an associated pair of vectors for all m = 1, 2, ..., M, and the lengths of the vectors in sets X and Y are N_x and N_y bits, respectively. These N_x- and N_y-bit long M bipolar vectors are directly stored in memory matrices A and B, which are constructed as follows

A = [X] = [X^1 ... X^m ... X^M]^t    (44)

B = [Y] = [Y^1 ... Y^m ... Y^M]^t    (45)

and

X^m = [X^m_1 X^m_2 ... X^m_j ... X^m_{Nx}],  j = 1, 2, ..., N_x
Y^m = [Y^m_1 Y^m_2 ... Y^m_i ... Y^m_{Ny}],  i = 1, 2, ..., N_y,  m = 1, 2, ..., M    (46)

One or both of the vectors forming an associated pair (X^m, Y^m) can be invoked by using either of the imperfect probe vectors X̂ or Ŷ, or both.
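The modularity claim above (adding or deleting a pair touches no other stored vector) follows directly from the row layout of (44)-(45); a minimal sketch, with a hypothetical class name:

```python
import numpy as np

class IBAMStore:
    """Direct storage per Eqs. (44)-(45): the pairs are the rows of
    A (M x Nx) and B (M x Ny).  Adding or deleting a pair touches only
    one row of each matrix; all other stored vectors are untouched."""
    def __init__(self, n_x: int, n_y: int):
        self.A = np.empty((0, n_x), dtype=int)
        self.B = np.empty((0, n_y), dtype=int)

    def add_pair(self, x, y):
        self.A = np.vstack([self.A, x])
        self.B = np.vstack([self.B, y])

    def delete_pair(self, m: int):
        self.A = np.delete(self.A, m, axis=0)
        self.B = np.delete(self.B, m, axis=0)
```

This contrasts with an outer-product memory, where every weight of the N_x × N_y matrix depends on every stored pair.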

6.2 Retrieval process

Let X̂^k and Ŷ^k be the initial probe vectors closest to the (X^k, Y^k) pair of stored vectors. The estimate of the ith bit, X̃^k_i, of X^k, the desired stored vector, is obtained using (44)-(46) as

X̃^k_i = Sgn[ A^t_i [ B Ŷ^k ] ]    (47)

or

X̃^k_i = Sgn[ Σ_{m=1}^{M} X^m_i ( Σ_{j=1}^{N_y} Y^m_j Ŷ^k_j ) ]    (48)

where t represents the matrix transpose, and m = 1, ..., M.

Similarly, the estimate of the ith bit, Ỹ^k_i, of the desired vector Y^k may be obtained as

Ỹ^k_i = Sgn[ B^t_i [ A X̂^k ] ]    (49)

or

Ỹ^k_i = Sgn[ Σ_{m=1}^{M} Y^m_i ( Σ_{j=1}^{N_x} X^m_j X̂^k_j ) ]    (50)

The probe vectors X̂ and Ŷ are given as input to memory

    Figure 2 IBAM is modular and expandable


matrices [A] and [B], which as such constitute the preprocessing blocks A_x and A_y, in which the weighting coefficients a_m and b_m are computed from (48) and (50) as

a_m = Σ_{j=1}^{N_x} X^m_j X̂_j = N_x − 2h_m    (51)

b_m = Σ_{j=1}^{N_y} Y^m_j Ŷ_j = N_y − 2h_m,  m = 1, ..., M    (52)

Using (51) and (52), (48) and (50) can be written as

X̃^k_i = Sgn[ Σ_{m=1}^{M} X^m_i b_m ]    (53)

Ỹ^k_i = Sgn[ Σ_{m=1}^{M} Y^m_i a_m ]    (54)

The weighting coefficients a_m and b_m so computed become the input to the second set of memory matrices A and B, which are designated as the association selector blocks B_x and B_y, to select the output vectors X̃ and Ỹ, respectively. The implementation structure of the A_x and B_x blocks is shown in Figs. 2 and 3.

Therefore the evolution equations of the IBAM can be written as

Ỹ^k_i = { +1 if Σ_{m=1}^{M} Y^m_i a_m > 0;  −1 if Σ_{m=1}^{M} Y^m_i a_m ≤ 0 }    (55)

X̃^k_i = { +1 if Σ_{m=1}^{M} X^m_i b_m > 0;  −1 if Σ_{m=1}^{M} X^m_i b_m ≤ 0 }    (56)

This is the direct storage model of the sequential bidirectional memory (IBAM).
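The whole IBAM recall cycle of (51)-(56) can be sketched in NumPy (assumptions: stored vectors as rows of A and B, ties in the evolution equations resolved to −1 as in (55)-(56); the function name is illustrative):

```python
import numpy as np

def ibam_recall(A: np.ndarray, B: np.ndarray, x_probe: np.ndarray, iters: int = 10):
    """IBAM recall via Eqs. (51)-(56): first compute the M weighting
    coefficients (preprocessing blocks), then let the selector blocks
    take the sign of the coefficient-weighted sum of stored rows.
    A is M x Nx, B is M x Ny, all entries +/-1."""
    sgn = lambda v: np.where(v > 0, 1, -1)   # Eqs. (55)-(56): ties -> -1
    x = x_probe.copy()
    for _ in range(iters):
        a = A @ x             # Eq. (51): a_m = <X^m, x>
        y = sgn(B.T @ a)      # Eqs. (54)/(55)
        b = B @ y             # Eq. (52): b_m = <Y^m, y>
        x_new = sgn(A.T @ b)  # Eqs. (53)/(56)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y
```

No outer-product matrix is ever formed: each pass costs O(M(N_x + N_y)) operations, matching the interconnection count claimed for the IBAM.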

6.3 Stability analysis

The functionality of the intraconnected direct storage bidirectional memory is based on the non-linear feedback network. For correct recall, every stored memory pattern pair (X^m, Y^m), for m = 1, ..., M, should form a local minimum on the energy surface. It must be shown that every stored memory pattern pair is asymptotically stable. For a sequential model, the next state of the pattern pair (X, Y) after the first iteration is defined from (48)

    Figure 3 Implementation structure of IBAM


and (50) as

X̃ = Sgn[ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ]    (57)

Ỹ = Sgn[ Σ_{m=1}^{M} Y^m g( Σ_{j=1}^{N_x} X^m_j X_j ) ]    (58)

For these recall conditions, a possible energy function may be defined as [23]

E(X, Y) = −[ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X − [ Σ_{m=1}^{M} Y^m g( Σ_{j=1}^{N_x} X^m_j X_j ) ] Y    (59)

where f(.) and g(.) are weighting functions and are computed as the inner product of a probe vector X̂ or Ŷ with the respective mth stored vector X^m or Y^m.

Assume that E(X′, Y) is the energy of the next state, in which Y stays the same as in the previous state. For the recall process of the pair (X, Y), the change in energy, ΔE_x, is given as

ΔE_x = E(X′) − E(X)    (60)

or

ΔE_x = −[ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X′ + [ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X
     = −[ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] (X′ − X)    (61)

It follows from (57) and (58) that

[ Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) ] X′ = | Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) |    (62)

where the quantity enclosed in vertical bars designates the absolute value; it is represented by |Z|.

Let

|Z| = | Σ_{m=1}^{M} X^m f( Σ_{j=1}^{N_y} Y^m_j Y_j ) |    (63)

Therefore ΔE_x can be written as

ΔE_x = −|Z| X′_i (X′_i − X_i)    (64)

Now, the examination of (64) shows that

(i) If X′_i − X_i > 0, then X′_i = 1 and X_i = −1; this gives ΔE_x < 0.

(ii) If X′_i − X_i < 0, then X′_i = −1 and X_i = 1; this implies that ΔE_x < 0.

(iii) If X′_i − X_i = 0, then ΔE_x = 0.

Therefore it follows from conditions (i)-(iii) that ΔE_x ≤ 0.

Similarly, for (Y′_i − Y_i), it can be shown that ΔE_y is decreasing, that is, ΔE(X, Y) ≤ 0.

The difference between the energies of the current state E(X, Y) and the next state E(X′, Y) is negative. Since E(X, Y) is bounded by 0 ≥ E(X, Y) ≥ −M(N_y × N_x) for all X and Y, the IBAM converges to a stable point where the energy forms a local minimum.

6.4 Analysis of SNR and the storage capacity

The analysis and estimate of the capacity of the IBAM is carried out using the SNR [23, 25, 26]. The amount of noise, or the number of errors, in a binary probe vector X̂ may be measured, in terms of Hamming distance, as the number of bit positions whose values do not match the bit values in the corresponding bit positions of a stored vector. It follows from (50) that

Ỹ^k_i = Sgn[ Σ_{m=1}^{M} Y^m_i ( Σ_{j=1}^{N_x} X^m_j X̂_j ) ]    (65)

Assuming that the probe vector X̂ is closest to X^k, one of the stored vectors, (50) can be written as

Ỹ^k_i = Sgn[ Y^k_i ( Σ_{j=1}^{N_x} X^k_j X̂_j ) + Σ_{m=1, m≠k}^{M} Y^m_i ( Σ_{j=1}^{N_x} X^m_j X̂_j ) ]    (66)

It is assumed that each bit in any of the M N-bit long vectors is the outcome of a Bernoulli trial of being −1 or +1. Equivalently, all of the M N-bit bipolar binary vectors are chosen randomly.


Using (54), (66) can be written as

Ỹ^k_i = Sgn[ Σ_{m=1}^{M} Y^m_i a_m ] = Sgn[ Y^k_i a_k + Σ_{m=1, m≠k}^{M} Y^m_i a_m ]    (67)

The first term in (67), Y^k_i, is the ith bit of the stored vector Y^k, and can be either +1 or −1. Without loss of generality, assume that Y^k_i = +1. The first term in (66) and (67) is a signal and the second term is a noise. These are written as

S_k = Y^k_i a_k,   Z_m = Σ_{m=1, m≠k}^{M} Y^m_i a_m    (68)

The signal power is given as

signal power = S_k(a_k) = Y^k_i (N_x − 2h_k)    (69)

where h_k = h is the Hamming distance between the probe vector X̂ and X^k, the desired stored vector that is closest, in terms of Hamming distance, to the probe vector X̂.

In general, the second term in (68) is perceived as noise and consists of a sum of (M − 1) i.i.d. random variables. Next, since the probe vector X̂ is closest to the stored memory vector X^k, all other (M − 1) stored vectors must be at least one bit farther, h + 1 bits, in Hamming distance from the probe vector X̂.

Note that the (M − 1) vectors are N bits long, and they form 2^N possible combinations. One such combination [27, 28] is given as

Y^m = [Y^m_1 Y^m_2 ... Y^m_{i−1} Y^m_i Y^m_{i+1} ... Y^m_N]    (70)

Therefore the probability of selecting one of these possible memory vectors is

Prob(Y^m) = (1/2)^N    (71)

As the N-bit long binary vectors are stored in bipolar form, any probe vector that is less than N/2 bits away [1, 18, 26], in terms of Hamming distance, from the desired stored vector can be used for retrieval.

The mean and variance of the second term are (M − 1) times the mean and variance of a single variable. These random variables are defined as

v_i = Y^m_i a_m,  m ≠ k and m = 1, ..., k−1, k+1, ..., M    (72)

Let

a_1 = a_2 = ... = a_M = a    (73)

For the (M − 1) vectors, it follows from (51) that

a = N − 2(h + 1)    (74)

Therefore

v_1 = Y^1_i a,  v_2 = Y^2_i a,  ...,  v_M = Y^M_i a    (75)

Since all the random variables have the same characteristics, it is sufficient to compute the mean and variance of a single variable, v_1. Next, the probability distribution of the single random variable v_1 can be computed as

Prob( v_1 = +(N − 2(h + 1)) ) = (1/2)^N C(N−1, h)
Prob( v_1 = −(N − 2(h + 1)) ) = (1/2)^N C(N−1, h),  h = 0, 1, ..., N − 1

where h is the Hamming distance between X^m and X̂. Since the random variable v_1 assumes positive and negative values with equal probability, the mean value of v_1 is zero, that is, E(v_1) = 0.

The variance of v_1 is computed as

E(v_1²) = 2 Σ_{h=0}^{N−1} (N − 2(h + 1))² (1/2)^N C(N−1, h)

Let N′ = N − 1; then E(v_1²) can be written as

E(v_1²) = 2 Σ_{h=0}^{N′} (N′ − 1 − 2h)² (1/2)^N C(N′, h)
        = 2 (1/2)^N [ Σ_{h=0}^{N′} ( (N′ − 1)² + 4h² − 4h(N′ − 1) ) C(N′, h) ]    (76)

The sums in (76) are evaluated [27, 28] as

(a) Σ_{h=0}^{N′} (N′ − 1)² C(N′, h) = (N′ − 1)² 2^{N′}


(b) Σ_{h=0}^{N′} 4h² C(N′, h) = 4 [ N′(N′ + 1) 2^{N′−2} ] = N′(N′ + 1) 2^{N′}

(c) −4(N′ − 1) Σ_{h=0}^{N′} h C(N′, h) = −4(N′ − 1) N′ 2^{N′−1} = −2N′(N′ − 1) 2^{N′}

Substituting (a)-(c) into (76), the result is obtained as

E(v_1²) = 2 (1/2)^N [ (N′ − 1)² + N′(N′ + 1) − 2N′(N′ − 1) ] 2^{N′}
        = 2 (1/2)^N (N′ + 1) 2^{N′}

or, since N′ + 1 = N and 2^{N′} = 2^{N−1},

E(v_1²) = 2 (1/2)^N N 2^{N−1} = N
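The closed form E(v_1²) = N can be checked against the defining sum in (76) directly; a numerical sketch (not part of the paper, illustrative function name):

```python
from math import comb

def noise_second_moment(n: int) -> float:
    """Evaluate Eq. (76) as written:
    E[v1^2] = 2 * sum_h (N - 2(h+1))^2 * (1/2)^N * C(N-1, h).
    The derivation in the text shows this collapses to exactly N."""
    return 2 * sum((n - 2 * (h + 1)) ** 2 * comb(n - 1, h)
                   for h in range(n)) / 2 ** n
```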

Therefore the mean and variance of the second term are (M − 1) times the mean and variance of a single variable

σ²(Z_m) = (M − 1)N    (77)

Therefore the capacity of the IBAM in terms of the SNR is given as

SNR = signal power / √(noise power, or noise variance)    (78)

or

SNR = (N − 2h)/√((M − 1)N) ≈ (N − 2h)/√(MN)    (79)

From (79), it follows that for a Hamming distance of h = N/2 the SNR, and hence the storage capacity, reduces to zero; it is maximum when h = 0.

With a Hamming distance of h = 0, the SNR from (79) may be approximated as

SNR = S/σ = (N − 2h)/√(MN) ≈ √(N/M)    (80)

Let M_0 be the number of stored vectors when the probe vector has no error; then h = 0, and the squared SNR is given as

SNR² = N²/((M_0 − 1)N) ≈ N/M_0    (81)

Therefore, to maintain the same recall capability when h errors are present in the probe vector, the squared SNRs in (79) and (81) must be equal. Therefore

N/M_0 = (N − 2h)²/(MN)  ⇒  M = (1 − 2h/N)² M_0    (82)

where M_0 is the number of stored vectors when the probe vector has no error.

Equation (82) shows that the storage capacity M reduces as a square function of the number of errors h, and the effect of h reduces as an inverse function of the length, N bits, of the stored vectors.

Next, the storage capacity of M binary vectors as a function of their length, N bits, can be estimated using the approach given in [25]. Therefore the capacity M is given as

M = N/(2 log N)    (83)

Also, following Hopfield [1, 18], the M stored vectors are related to their N-bit length as

M ≈ N/7 ≃ 0.15N    (84)

Note that the storage capacity increases as the length, N bits, of the stored vectors is increased.
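The capacity-degradation rule (82) is simple to evaluate; a minimal sketch with an illustrative function name:

```python
def capacity_with_errors(m0: int, n: int, h: int) -> float:
    """Eq. (82): capacity shrinks quadratically with the number of probe
    errors h, relative to the error-free capacity M0, for N-bit vectors."""
    return (1 - 2 * h / n) ** 2 * m0
```

For example, probes with errors in a quarter of the bits (h = N/4) cut the usable capacity to a quarter of M_0.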

7 Implementation and computational comparison of IBAM, BAM and HAM

The implementation, functional and computational comparison of the proposed model of a direct storage bidirectional memory, IBAM, is carried out, and it is shown that the proposed IBAM has far smaller implementation and computational requirements and far superior performance, and is functionally equivalent to the sequential, intraconnected and other models of BAMs of outer-product type, and to the concatenated memory of Hopfield type, HAM. The superiority of the IBAM is demonstrated by means of examples.


7.1 Functional and structural equivalence of IBAM and IAM with BAM and HAM

It is shown that the IBAM is functionally equivalent to a broad class of traditional sequential bidirectional memories of outer-product type. It follows from (48) and (50) that

X̃^k_i = Sgn[ Σ_{m=1}^{M} X^m_i ( Σ_{j=1}^{N_y} Y^m_j Ŷ^k_j ) ]    (85)

Equation (85) can be rewritten as

X̃^k_i = Sgn[ Σ_{j=1}^{N_y} ( Σ_{m=1}^{M} X^m_i Y^m_j ) Ŷ^k_j ]    (86)

or

X̃^k_i = Sgn[ Σ_{j=1}^{N_y} [W_{ij}] Ŷ^k_j ]    (87)

where

W = Σ_{m=1}^{M} X^m Y^{m,t}    (88)

In an auto-associative memory of Hopfield type, a set X consisting of N-bit long M bipolar binary vectors is stored. The Hopfield-type associative memory, HAM, is constructed by performing the outer-product operation on the M bipolar binary vectors with themselves. Therefore, replacing the M vectors in set Y with the M vectors in set X in (85), the retrieval through the memory matrix T of the HAM can be written as

X̃^k_i = Sgn[ Σ_{m=1}^{M} X^m_i ( Σ_{j=1}^{N_x} X^m_j X̂^k_j ) ]    (89)

Following the same steps as those for the BAM, (89) can be written as

X̃^k_i = Sgn[ Σ_{j=1}^{N_x} ( Σ_{m=1}^{M} X^m_i X^m_j ) X̂^k_j ]    (90)

or

X̃^k_i = Sgn[ Σ_{j=1}^{N_x} [T_{ij}] X̂^k_j ]    (91)

where T is the memory matrix of Hopfield type.

Therefore the direct storage memory, the improved auto-associative memory (IAM), given in (89), is equivalent to the Hopfield-type memory, HAM, given in (91). If, in the analysis of the IBAM and BAM, the Y set of vectors is replaced with the X set of vectors, then the analysis for the IAM and HAM is the same as that for the IBAM and BAM.

Note that the memory matrices [W] and [W^t] given in (87) and (88) for the IBAM are the same as those given by (8) and (9); these constitute the direct storage IBAM, which is functionally equivalent to, and possesses all the attributes of, the traditional sequential bidirectional memory of outer-product type commonly reported in the literature. The SNR and storage capacity of the IBAM as given by (80) and (83) are equal to those of the BAM given by (25) and (26).
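The equivalence (85)-(88) is just associativity of the matrix products, and can be verified numerically; a small sketch (random stored pairs, assumed row-wise storage, ties in Sgn resolved to +1):

```python
import numpy as np

# One recall step through the direct storage matrices (coefficients first)
# must equal one step through the outer-product matrix W = sum_m X^m Y^{m,t}.
rng = np.random.default_rng(0)
A = rng.choice([-1, 1], size=(5, 12))    # five stored X^m, Nx = 12
B = rng.choice([-1, 1], size=(5, 8))     # five stored Y^m, Ny = 8
W = A.T @ B                              # Eq. (88)

y_probe = B[2]                           # probe with one of the stored Y^m
sgn = lambda v: np.where(v >= 0, 1, -1)
x_direct = sgn(A.T @ (B @ y_probe))      # Eq. (85): M coefficients, then sum
x_outer = sgn(W @ y_probe)               # Eq. (87): through W
assert np.array_equal(x_direct, x_outer)
```

The equality is exact (integer arithmetic is associative), which is the content of the functional-equivalence claim; only the operation counts differ.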

7.2 Comparison of the convergence process

In the direct storage BAM, the energy E(X, Y) is bounded by 0 ≥ E(X, Y) ≥ −M(N_y × N_x), whereas the energy in an outer-product-type BAM is bounded by 0 ≥ E(X, Y) ≥ −2M(N_y × N_x).

Clearly, the energy level in outer-product-type memories is much higher than the energy level in direct storage-type memories, and the lower energy levels stabilise at a relatively faster rate than the higher energy levels. Consequently, the direct storage BAM considered in this paper will converge faster than the memories of outer-product type. Therefore the stability characteristics of the IBAM are superior to those of the BAM or HAM (Hopfield-type associative memory).

7.3 Implementation and computational requirements of a conventional BAM

The construction of a traditional BAM of outer-product type requires two (N_x × N_y) memory matrices, W and W^t, that consist of 2(N_x N_y) or O(N²) interconnections with weight strengths ranging between ±M; each interconnection needs log₂M + 1 bits of memory storage, where M is the number of stored vectors.

In each complete cycle of recall, both the matrix W and its transpose W^t are used in each iteration. The construction of the matrix W for M stored bipolar vectors requires (M − 1)N_x N_y additions, M N_x N_y multiplications and 2N_x N_y interconnections. In each iteration when a probe vector Ŷ^k is used, the estimate X̃^k of the corresponding stored vector X^k is obtained, as given in (92). Similarly, when the input probe vector X̂^k is used, the estimate Ỹ^k of the corresponding stored vector Y^k is obtained at the


output. This completes one complete cycle

[ X̃^k_1  ]         [ W_11   W_12   ...  W_1Ny  ] [ Y^k_1  ]
[ X̃^k_2  ] = Sgn(  [ W_21   W_22   ...  W_2Ny  ] [ Y^k_2  ] )
[   ...   ]         [  ...                      ] [  ...   ]
[ X̃^k_Nx ]         [ W_Nx1  W_Nx2  ...  W_NxNy ] [ Y^k_Ny ]    (92)

This operation requires N_x N_y multiplications and N_x(N_y − 1) additions. Now assume that, to achieve stability, P iterations are required. Then a complete cycle requires 2P N_x N_y multiplications and 2P N_x(N_y − 1) additions. Thus the conventional BAM models require (M + 2P)N_x N_y multiplications, (M + 2P − 1)N_x N_y − 2P N_x additions and 2N_x N_y interconnections.

7.4 Implementation and computational requirements of an IBAM

The construction of the direct storage sequential BAM (IBAM) requires 2M(N_x + N_y), that is, O(N) or about 30% of the interconnections, with weight strengths ranging between ±1; each interconnection needs 2 bits of memory storage. It is computationally very efficient as compared to sequential, intraconnected and other models of BAMs of outer-product type.

It has a simpler, modular and expandable structure, and is implementable in VLSI, optical, nanotechnology and software form. The implementation structure of the IBAM is shown in Figs. 2 and 3.

The proposed direct storage model is an improved bidirectional memory that requires the computation of M a-coefficients and M b-coefficients. The a-coefficients require M N_x multiplications and M(N_x − 1) additions, and the b-coefficients require M N_y multiplications and M(N_y − 1) additions. So, for P iterations required to achieve stability, a complete cycle requires 2PM(N_y + N_x) multiplications and 2PM(N_x + N_y − 2) additions.

The concatenated bidirectional memory of Hopfield type will require P(N_x + N_y)² multiplications.
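The per-recall multiplication counts quoted above and in Table 1 can be tabulated directly; a minimal sketch (illustrative function name; recall-phase counts only, construction costs excluded):

```python
def op_counts(m: int, p: int, nx: int, ny: int) -> dict:
    """Multiplication counts for P iterations of recall, per Sections
    7.3-7.4: IBAM needs P*M*(Nx+Ny), an outer-product BAM 2*P*Nx*Ny,
    and a concatenated Hopfield memory (HAM) P*(Nx+Ny)**2."""
    return {
        "IBAM": p * m * (nx + ny),
        "BAM": 2 * p * nx * ny,
        "HAM": p * (nx + ny) ** 2,
    }
```

With the Example 3 parameters (M = 15, P = 20, N_x = 60, N_y = 40) this reproduces the 30 000 / 96 000 / 200 000 figures of Table 1.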

7.5 Examples: comparison of implementation and computational performance of IBAM, IAM, BAM and HAM

Let M = 15, P = 20, N_x = 60 bits and N_y = 40 bits. The interconnection requirements, ranges of weight strengths and multiplication operations of these models are computed in Table 1, which clearly demonstrates the superiority of performance and the simplicity of structural implementation of the IBAM proposed in this paper.

7.5.1 Example 1: comparative performance of IBAM and BAM: Consider, as an example, a direct storage bidirectional memory that stores the associated sets X and Y consisting of four pairs of bipolar vectors, given in (93a) and (93b).

Use the probe vector X̂ that is closest, in terms of Hamming distance, to the stored vector X^1 of the pair (X^1, Y^1), given as

X̂ = [1 0 1 0 1 0 1 0 0 0 0 0 0 0 0]    (94)

The alpha coefficients are computed as

a_1 = 4,  a_2 = 0,  a_3 = 2,  a_4 = 0

and the retrieval sequence is

X̂ → a → Ỹ^1;  Ỹ^1 → b → X̃^1;  X̃^1 → a → Ỹ^1

Clearly, the vectors X^1 and Y^1 that form an associated pair are obtained, in about three iterations, as

X^1 = [1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]
Y^1 = [1 1 1 1 0 0 0 0 1 1]

    In order to construct the sequential BAM, W, of outer-product type, the outer-product operation is performed on

X = [X^1 ; X^2 ; X^3 ; X^4], a 4 × 15 matrix whose rows are the stored bipolar (±1) vectors; in particular
X^1 = [1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1]    (93a)

Y = [Y^1 ; Y^2 ; Y^3 ; Y^4], a 4 × 10 matrix whose rows are the stored bipolar (±1) vectors; in particular
Y^1 = [1 1 1 1 −1 −1 −1 −1 1 1]    (93b)


the vectors in sets X and Y. The memory matrix W thus obtained is given in (95); it is a 15 × 10 matrix whose integer entries range over {0, ±2, ±4}    (95)

Using the same probe vector given in (94), the retrieval sequence is

X̂ → Ỹ^1 → X̃^1 → Ỹ^1

So, in about three iterations, the correct stored pair (X^1, Y^1) is obtained.

For the case of the HAM, the two sets of vectors X and Y are first concatenated together and then the outer-product operation is performed to obtain the memory matrix T. It is assumed that it will take about three iterations, the same as needed for the BAM.

The comparative implementation and computational requirements have been tabulated in Table 1.

7.5.2 Example 2: comparative performance of IBAM, IAM and HAM: In order to demonstrate the implementation and computational requirements for the storage and retrieval of information in a completely connected auto-associative memory, consider a set of M = 4 bipolar vectors [X^1 X^2 X^3 X^4], each 20 bits long (N = 20), given in (96) [19]. The corresponding memory matrix T is given in (97).

Consider a probe vector X̂^k given as

X̂^k = [0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0]    (98)

The retrieval process is initiated when the probe vector X̂^k is applied as input to the memory matrix T. The iterative retrieval process using X̂^k in (98) gives the result after the first iteration as shown below.

X = [X^1 ; X^2 ; X^3 ; X^4], a 4 × 20 matrix whose rows are the stored bipolar (±1) vectors; in particular
X^4 = [−1 1 1 1 −1 1 1 −1 1 −1 1 −1 −1 −1 1 −1 1 1 1 −1]    (96)

T = Σ_{m=1}^{4} X^m X^{m,t}, the fully connected 20 × 20 memory matrix whose integer entries range over {0, ±2, ±4}    (97)


First iteration, after thresholding:

[0 0 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]

Second iteration, after thresholding:

[0 1 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]

The stable state is reached as

[0 1 1 1 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0]

This is one of the stored vectors and is obtained in three iterations.

    Note that the memory matrix Tis a fully connected matrix.It is 2020, has 400 number of interconnections and the

    strength of each interconnection ranges between 2M andM, whereMis number of stored vectors.

    For the IAM, the set of four bipolar binary vectors is arranged in the form of an (M x Nx) or (4 x 20) memory matrix. Using the X̂ probe vector given in (98), which is closest in terms of Hamming distance to the stored vector X4, the vector X4 is correctly retrieved in two iterations.
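One plausible reading of this direct-storage recall step can be sketched as below, under the assumption that retrieval selects the stored row with maximum correlation to the probe (equivalently, minimum Hamming distance for bipolar vectors); the paper's exact nonlinearity is defined in its earlier sections, so this is illustrative only.

```python
import numpy as np

def iam_retrieve(X, probe):
    """Direct-storage recall (winner-take-all variant, an assumption):
    correlate the probe with each stored row and return the best match."""
    s = X @ probe          # M correlations, one per stored vector
    winner = np.argmax(s)  # max correlation == min Hamming distance
    return X[winner]

# Illustrative bipolar vectors stored directly as rows of an (M x Nx) matrix
X = np.array([[ 1, -1,  1,  1, -1, -1],
              [-1,  1, -1,  1,  1, -1],
              [ 1,  1, -1, -1,  1,  1]])
probe = np.array([1, -1, 1, -1, -1, -1])  # first row with one bit flipped
print(iam_retrieve(X, probe))             # recovers the first stored row
```

The bidirectional IBAM version stores the X and Y sets as two such matrices and alternates directions, which is consistent with its M(Nx + Ny) interconnection count.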

    The respective performance comparison is tabulated in Table 1.

    7.5.3 Example 3: comparison of implementation

    and computational performance of IBAM, IAM, BAM and HAM: Let M = 15, P = 20, Nx = 60 bits and Ny = 40 bits. The interconnection requirements, range of weight strengths and multiplication operations of these models are computed. The results, shown in Table 1, clearly demonstrate the superiority in performance and the simplicity of structural implementation of the IBAM proposed in this paper.
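The interconnection formulas of Table 1 can be checked numerically for these parameters; a minimal sketch:

```python
# Numeric check of Example 3's interconnection counts (M = 15,
# Nx = 60, Ny = 40), using the formulas from Table 1.
M, Nx, Ny = 15, 60, 40

intc_ibam     = M * (Nx + Ny)   # direct storage IBAM
intc_bam      = 2 * Nx * Ny     # outer-product BAM
intc_hopfield = (Nx + Ny) ** 2  # BAM of Hopfield type

print(intc_ibam, intc_bam, intc_hopfield)  # 1500 4800 10000
```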

    7.6 Capacity analysis

    The traditional BAM is of outer-product type and requires O(N²) interconnections. It requires 2NxNy, or with Nx = Ny = N, 2N² interconnections, and stores M(Nx + Ny) or 2MN information bits. Therefore the number of

    Table 1 Comparison of different BAMs

    | Type of BAM           | Interconnections     | Range of weights | Bits/Intc.            | Total bits | Iterations | Multiply operations     |
    | Example 1             |                      |                  |                       |            |            |                         |
    | IBAM                  | M(Nx + Ny) = 100     | ±1               | 2 bits                | 200        | 3          | PM(Nx + Ny) = 30 000    |
    | BAMs                  | 2NxNy = 300          | ±M = ±4          | log2 M + 1 = 3 bits   | 900        | 3          | 2P(NxNy) = 96 000       |
    | BAMs of Hopfield type | (Nx + Ny)² = 625     | ±M = ±4          | log2 M + 1 = 3 bits   | 1875       | 3          | P(Nx + Ny)² = 200 000   |
    | Example 2             |                      |                  |                       |            |            |                         |
    | IBAM                  | M(Nx + Ny) = 160     | ±1               | 2 bits                | 320        | 2          | PM(Nx + Ny) = 30 000    |
    | BAMs                  | 2NxNy = 400          | ±M = ±4          | log2 M + 1 = 3 bits   | 1200       | 3          | 2P(NxNy) = 96 000       |
    | BAMs of Hopfield type | (Nx + Ny)² = 400     | ±M = ±4          | log2 M + 1 = 3 bits   | 1200       | 3          | P(Nx + Ny)² = 200 000   |
    | Example 3             |                      |                  |                       |            |            |                         |
    | IBAM                  | M(Nx + Ny) = 1500    | ±1               | 2 bits                | 3000       | 20         | PM(Nx + Ny) = 30 000    |
    | BAMs                  | 2NxNy = 4800         | ±M = ±15         | log2 M + 1 = 5 bits   | 24 800     | 20         | 2P(NxNy) = 96 000       |
    | BAMs of Hopfield type | (Nx + Ny)² = 10 000  | ±M = ±15         | log2 M + 1 = 5 bits   | 50 000     | 20         | P(Nx + Ny)² = 200 000   |



    interconnections required to store each bit is obtained as

    INT_bt = N/M = 2 log_e N    (99)

    Similarly, the proposed IBAM directly stores the information vectors and requires O(N) interconnections. It requires M(Nx + Ny) or 2MN interconnections and stores M(Nx + Ny) or 2MN information bits. Therefore the number of interconnections required to store one bit is obtained as

    INT_bd = 2MN / 2MN = 1 interconnection per bit

    As shown in Fig. 4, the increase in storage capacity is given as

    C_inc = 2 log_e N - 1    (100)

    As shown in Fig. 5, the percentage increase due to the use of IBAM is obtained as

    η_c = [(2 log_e N - 1) / (2 log_e N)] x 100%    (101)

    7.7 Interconnection storage requirements

    The traditional BAM, with N = Nx = Ny, requires 2N² interconnections with weight strength ranging between ±M. Therefore each interconnection requires log2 M bits for the magnitude plus one bit for the sign. The total memory storage requirement is given as

    ST_BAM = 2N²(log2 M + 1) bits    (102)

    Similarly, the IBAM requires M(Nx + Ny), or 2MN if Nx = Ny = N, interconnections with weight strength ranging between ±1. Therefore each interconnection requires 1 bit for the magnitude and 1 bit for the sign. The memory storage requirement in bits is given as

    ST_IBAM = 2(2MN) bits    (103)

    As a result, the net saving in memory storage is given as

    ST_save = 2N[N(log2 M + 1) - 2M]    (104)

    The percentage saving because of the use of IBAM is obtained as

    η_ST = {[N(log2 M + 1) - 2M] / [N(log2 M + 1)]} x 100    (105)
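Equations (102)–(104) can be checked with a short sketch. The sample point N = 50, M = 15 is an assumption for illustration (the paper states only the formulas), and log2 M is rounded up to whole bits.

```python
import math

def storage_bits(N, M):
    """Eqs. (102)-(104): memory needed to hold the weights themselves.
    BAM: 2N^2 interconnections at (log2 M + 1) bits each;
    IBAM: 2MN interconnections at 2 bits each (sign + unit magnitude)."""
    bits_per_weight = math.ceil(math.log2(M)) + 1   # rounded to whole bits
    st_bam  = 2 * N**2 * bits_per_weight
    st_ibam = 2 * (2 * M * N)
    return st_bam, st_ibam, st_bam - st_ibam

st_bam, st_ibam, saved = storage_bits(50, 15)
print(st_bam, st_ibam, saved)   # 25000 3000 22000
```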

    8 Simulation and analysis of comparative performance

    The number of binary vectors stored in neural associative memories is rather small. The M stored vectors are related to their length of N bits by (83) as

    M = N / (2 log N)    (106)

    Also, following Hopfield [1, 18], the relation of M stored vectors to their N-bit length is given as N = (6-7)M. Therefore

    M = N/7 to N/6 ≃ 0.15N    (107)

    The implementation and computational complexity of these memory models are compared in terms of their requirements of interconnections and the number of add and multiply operations needed for retrieval.

    Figure 4 Increase in capacity

    Figure 5 Percentage increase in efficiency



    Let C and D denote the computational loads of the traditional and direct storage BAMs, respectively, and let P be the number of iterations required for retrieval.

    For traditional BAMs, the number of interconnections INT_t, and the add and multiply operations C_a and C_m, are given as

    INT_t = 2NxNy
    C_a = (M - 1)NxNy + 2PNx(Ny - 1) ≃ MNxNy + 2PNxNy = NxNy(M + 2P)
    C_m = MNxNy + 2PNxNy = NxNy(M + 2P)

    Similarly, for the direct storage improved memory IBAM, the number of interconnections INT_d, and the add and multiply operations D_a and D_m, are given as

    INT_d = 2M(Nx + Ny)
    D_a ≃ 2PM(Nx + Ny)
    D_m = 2PM(Ny + Nx) = 2PM(Nx + Ny)

    Assuming one multiply operation equals four add operations, and with Nx = Ny = N and M = 0.15N, the comparison of interconnections and the total computational loads C and D, in terms of multiply operations, is obtained as

    C/D = [(5/4)N²(M + 2P)] / [(5/4)4PMN] = N/(4P) + N/(2M)    (108)

    INT_t / INT_d = 2N² / (4 x 0.15N²) = 3.33    (109)

    Substituting M from (106) into (108), one obtains

    C/D = N/(4P) + log N    (110)

    Also, substituting M from (107), one obtains

    C/D = N/(4P) + 10/3    (111)
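Equation (110) reproduces the Table 2 entries directly; a minimal sketch, checked against the P = 30, N = 400 case quoted in the text:

```python
import math

def improvement_factor(N, P, hopfield_load=False):
    """Eq. (110): C/D = N/(4P) + ln N, with M = N/(2 ln N);
    eq. (111): C/D = N/(4P) + 10/3, with M = 0.15 N."""
    return N / (4 * P) + (10 / 3 if hopfield_load else math.log(N))

print(round(improvement_factor(400, 30), 2))  # 9.32, as quoted in the text
print(round(improvement_factor(20, 1), 2))    # 8.0, first entry of Table 2
```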

    Equations (110) and (111) are used to predict the simulation results of the comparative performance of the conventional BAM and the direct storage improved BAM, presented in Table 2, which shows the improvement factors; for example, for P = 30 and N = 400, the improvement factor of IBAM is 9.32 times when compared to the traditional BAM.

    Fig. 6 shows the improvement in performance of the IBAM because it uses only about 30% of the interconnections of the conventional BAM.

    Table 2 Comparative performance of BAM and IBAM

    P/N 20 30 40 60 80 100 250 400 800 1000 1500

    1 8.00 10.90 13.69 19.09 24.38 29.61 55.30 105.99 206.68 256.91 382.31

    5 4.00 4.90 5.69 7.09 8.38 9.61 15.30 25.99 46.68 56.91 82.31

    10 3.50 4.15 4.69 5.59 6.38 7.11 10.30 15.99 26.68 31.91 44.81

    15 3.33 3.90 4.36 5.09 5.72 6.27 8.63 12.66 20.02 23.57 32.31

    20 3.25 3.78 4.19 4.84 5.38 5.86 7.80 10.99 16.68 19.41 26.06

    30 3.16 3.65 4.02 4.59 5.05 5.44 6.96 9.32 13.35 15.24 19.81

    40 3.12 3.59 3.94 4.47 4.88 5.23 6.55 8.49 11.68 13.16 16.69

    50 3.10 3.55 3.89 4.39 4.78 5.11 6.30 7.99 10.68 11.91 14.81

    80 3.06 3.49 3.81 4.28 4.63 4.92 5.92 7.24 9.18 10.03 12.00

    100 3.05 3.48 3.79 4.24 4.58 4.86 5.80 6.99 8.68 9.41 11.06

    Figure 6 Improvement factor of interconnections



    9 Conclusions

    This paper presents an efficient and improved model of a direct storage bidirectional memory, IBAM, and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures. This model directly stores the X and Y associated sets of M bipolar binary vectors, requires about 30% of the interconnections, and is computationally very efficient as compared to sequential, intraconnected and other BAM models of outer-product type. The simulation results show that it has log_e N times higher storage capacity, superior performance, and faster convergence and retrieval time when compared to traditional sequential and intraconnected bidirectional memories.

    The analysis of SNR and stability of the proposed modelhas been carried out.

    It is shown that it is functionally equivalent to, and possesses all attributes of, a BAM of outer-product type, and yet it is a simple and robust in structure, VLSI-, optical- and nanotechnology-realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix.

    10 References

    [1] HOPFIELD J.J.: 'Neurons with graded response have collective computational properties like those of two-state neurons', Proc. Natl. Acad. Sci. USA, 1984, 81, pp. 3088-3092

    [2] TANK D.W., HOPFIELD J.J.: 'Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit', IEEE Trans. Circuits Syst., 1986, CAS-33, (5), pp. 533-541

    [3] AMARI S., MAGINU K.: 'Statistical neurodynamics of associative memory', Neural Netw., 1988, 1, (1), pp. 63-73

    [4] BHATTI A.A.: 'Analysis of the effectiveness of using unipolar and bipolar binaries, and Hamming distance in neural network computing', J. Opt. Eng., 1992, 31, (9), pp. 1972-1975

    [5] BHATTI A.A.: 'Analysis of performance issues in neural network based associative memory models' (IJCNN, San Diego, CA, June 1990)

    [6] HWANG J.-D., HSIAO F.-H.: 'Stability analysis of neural-network interconnected systems', IEEE Trans. Neural Netw., 2003, 14, (1), pp. 201-208

    [7] KOSKO B.: 'Bi-directional associative memories', IEEE Trans. SMC, 1988, 18, pp. 49-60

    [8] SIMPSON P.K.: 'Higher-ordered and intraconnected bidirectional associative memories', IEEE Trans. SMC, 1990, 20, (3), pp. 637-653

    [9] DENKER J.S.: 'Highly parallel computation network employing a binary-valued T matrix and single output amplifiers'. United States Patent 4737929, 12 April 1988

    [10] WANG T., ZHUANG X., XING X., XIAO X.: 'A neuron-weighted learning algorithm and its hardware implementation in associative memories', IEEE Trans. Comput., 1993, 42, (5), pp. 636-640

    [11] CHRIS L.: 'Computing with nanotechnology may get a boost from neural networks', Nanotechnology, 2007. DOI: 10.1088/0957-4484/18/36/36502

    [12] CLARK N.A.: 'Method for parallel fabrication of nanometer scale multi-device structures'. United States Patent 4802951, 7 February 1989

    [13] NUGENT A.: 'Physical neural network design incorporating nanotechnology'. United States Patent 6889216, 3 May 2005

    [14] NUGENT A.: 'Training of a physical neural network'. USPTO Application 20060036559, 16 February 2006

    [15] VOGEL V., BAIRD B. (EDS.): 'Nanotechnology'. Report of the National Nanotechnology Initiative Workshop, Arlington, VA, USA, 9-11 October 2003

    [16] CASALI D., COSTANTINI G., PERFETTI R., RICCI E.: 'Associative memory design using support vector machines', IEEE Trans. Neural Netw., 2006, 17, (5), pp. 1165-1174

    [17] WANG T., ZHUANG X., XING X.: 'Designing bidirectional associative memories with optimal stability', IEEE Trans. Syst. Man Cybern., 1994, 24, (5), pp. 778-780

    [18] HOPFIELD J.J.: 'Neural networks and physical systems with emergent collective computational abilities', Proc. Natl. Acad. Sci. USA, 1982, 79, pp. 2554-2558

    [19] FARHAT N.: 'Optoelectronics builds viable neural-net memory', Electronics, 1986, pp. 41-44

    [20] HOPFIELD J.J.: 'Electronic network for collective decision based on large number of connections between signals'. United States Patent 4660166, 21 April 1987

    [21] MOOPENN A.W., THAKOOR A.P., LAMBE J.J.: 'Hybrid analog digital associative neural network'. United States Patent 4807168, 21 February 1989

    [22] NUGENT A.: 'Pattern recognition utilizing a nanotechnology-based neural network'. United States Patent 7107252, 12 September 2006

    [23] COTTRELL M.: 'Stability and attractivity in associative memory networks', Biol. Cybern., 1988, 58, pp. 129-139

    [24] WANG L.: 'Heteroassociations of spatio-temporal sequences with the bidirectional associative memory', IEEE Trans. Neural Netw., 2000, 11, (6), pp. 1503-1505

    [25] MCELIECE R.J., POSNER E.C., RODEMICH E.R., VENKATESH S.S.: 'The capacity of the Hopfield associative memory', IEEE Trans. Inf. Theory, 1987, 33, (4), pp. 461-482

    [26] WANG J.-H., KRILE T.F., WALKUP J.F.: 'Determination of the Hopfield associative memory characteristics using a single parameter', Neural Netw., 1990, 3, (3), pp. 319-331

    [27] PAPOULIS A., PILLAI S.U.: 'Probability, random variables and stochastic processes' (McGraw-Hill, New York, USA, 2002, 4th edn.)

    [28] ROSS S.: 'A first course in probability' (Pearson Education, Inc., London, UK, 2002, 7th edn.)