

Fuzzy Sets and Systems 107 (1999) 147-157

The fuzzy associative memory of max-min fuzzy neural network with threshold¹

Puyin Liu

Department of System Engineering and Mathematics, National University of Defence Technology, Changsha, Hunan 410073, People's Republic of China

Received August 1996; received in revised form October 1997

Abstract

In this paper, we introduce the max-min fuzzy neural network with threshold, which generalizes the fuzzy neural network models in [1, 4-6, 8, 10], and show that the storage capacity of the two-layer max-min fuzzy neural network is identical with that of the max-min fuzzy neural network with hidden layers. For given fuzzy pattern pairs $(X_1, Y_1), \dots, (X_p, Y_p)$, we obtain an equivalent condition under which the family of given fuzzy pattern pairs can be stored in the fuzzy neural network. Finally, we give an example to demonstrate our conclusions. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Max-min fuzzy neural network; Fuzzy associative memory; Threshold; FAM space

1. Introduction

The contemporary research on fuzzy neural networks (FNNs) focuses mainly on two classes of models. One class is called regular FNNs, whose input signals, output signals and connection weights are fuzzy-valued and whose internal operations are based on standard fuzzy arithmetic and the extension principle. The studies of regular FNNs focus mainly on learning algorithms and the properties of the networks; for the details of this literature see [2, 3, 9]. The other class is called hybrid FNNs, of which the most important and fruitful ones are max-min FNNs. Many applications of max-min FNNs to pattern recognition, automatic control, system prediction, etc. are shown in [4, 5, 8].

Among max-min FNNs, the most important one is the fuzzy associative memory (FAM). Improving the storage capacity of a FAM network is of much significance, because the hardware and computation requirements for implementing an FNN with high storage capacity can then be significantly reduced. Hence there has been much research on the storage capacity of FAMs in recent years. At first, Kosko [10] designed a learning algorithm, the fuzzy Hebbian rule, for fuzzy associative memory. Despite its success in various applications, max-min FNNs with the fuzzy Hebbian rule suffer from very low storage capacity: fuzzy pattern pairs $(X_1, Y_1), \dots, (X_p, Y_p)$ $(p > 1)$ cannot be completely stored in the networks, i.e. only one fuzzy pattern can be stored per FAM matrix. Thus, much hardware and computation are usually required to implement the networks.

¹ Supported by Defence Research Foundation.

0165-0114/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0165-0114(97)00352-7


To make up for the defects of the fuzzy Hebbian rule, Fan improved Kosko's method in [6] with the maximum solution matrix, and gave an equivalent condition that a family of fuzzy pattern pairs can be completely stored in the network. Using Fan's learning algorithm, we can store multiple fuzzy patterns in one max-min FAM matrix, which improves on Kosko's learning matrix. Both Blanco [1] and He [8] designed iteration algorithms for the connection weight matrix of the max-min FNN, which show the self-adaption of the FNN, but the storage capacity of the FNN cannot be made higher with their methods than with Fan's method. Chung [4] established a theorem for perfect recall of all the stored fuzzy patterns, which also improved Kosko's method. By introducing the max-bounded product ($\vee$-$\odot$) composition, Chung generalized the max-min FAM. However, the family of fuzzy patterns in Chung's method must be semi-overlapped or $\vee$-$\odot$ orthogonal, conditions which are somewhat hard to satisfy in practice.

In this paper, we generalize max-min FNNs by introducing a threshold at each unit of the max-min FNN. The paper is organized as follows. In Section 2, we state Kosko's and Fan's learning algorithms for the connection weight matrices of max-min FNNs, and prove that the storage capacities of the two-layer max-min FNN and of one with hidden layers are identical. In Section 3, we design the learning algorithms for the connection weight matrix and the threshold vectors. Also, we show the equivalent conditions for the fuzzy pattern pairs' complete storage in the max-min FNN with threshold. In Section 4, we give an example to demonstrate our conclusions. The theorems and examples in the paper show that our learning algorithm improves the storage capacities of FNNs based on Fan's or Kosko's methods and removes Chung's restrictions on the fuzzy patterns. So our models are more applicable and effective.

2. Max-min FNN architecture

Let the fuzzy input signals considered in the paper be in $[0,1]^n$, the output signals in $[0,1]^m$, and let $\mu_{n\times m}$ be the set of all $n \times m$ matrices whose elements are in $[0,1]$. From now on, we suppose $(X_k, Y_k)$ $(k = 1, \dots, p)$ to be fuzzy pattern pairs, with $X_k = (x_1^k, \dots, x_n^k)$ the fuzzy input signal and $Y_k = (y_1^k, \dots, y_m^k)$ the fuzzy output signal. Kosko [10] developed the FAM model shown in Fig. 1.

Denote by $W = (w_{ij}) \in \mu_{n\times m}$ the connection weight matrix. The I/O relations are as follows:

$$y_j = \bigvee_{i=1}^{n} (x_i \wedge w_{ij}) \quad (j = 1, \dots, m). \tag{1}$$

With the fuzzy matrices and the fuzzy vectors, we can rewrite (1) as follows:

$$Y = X \circ W, \tag{2}$$

where $X = (x_1, \dots, x_n) \in [0,1]^n$, $Y = (y_1, \dots, y_m) \in [0,1]^m$, and $\circ$ stands for the max-min ($\vee$-$\wedge$) composition operation.

Fig. 1. Two-layer max-min FNN.


For given fuzzy pattern pairs $(X_k, Y_k)$, $k = 1, \dots, p$, Kosko [10] designed the fuzzy Hebbian rule to obtain the learning matrix

$$W = \bigvee_{k=1}^{p} (X_k^{\mathrm{T}} \circ Y_k).$$
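As a quick illustration (ours, not the paper's), relation (2) and the fuzzy Hebbian matrix above can be computed in a few lines of NumPy; the function and array names are illustrative assumptions:

```python
import numpy as np

def maxmin_compose(x, w):
    """Relation (2): y_j = max_i min(x_i, w_ij), for x in [0,1]^n and w in [0,1]^(n x m)."""
    return np.max(np.minimum(x[:, None], w), axis=0)

def hebbian_matrix(X, Y):
    """Kosko's fuzzy Hebbian rule: w_ij = max_k min(x_i^k, y_j^k).
    X has shape (p, n) (one input pattern per row); Y has shape (p, m)."""
    return np.max(np.minimum(X[:, :, None], Y[:, None, :]), axis=0)
```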

However, it is obvious that for $p > 1$, the fuzzy pattern pairs $(X_k, Y_k)$ $(k = 1, \dots, p)$ cannot in general be completely stored in model (2). Recently, Fan [6] improved Kosko's algorithm by defining an implication connection between the inputs $X_k$ and the outputs $Y_k$. We introduce the following notations:

$$P = \{1, \dots, p\}, \quad N = \{1, \dots, n\}, \quad M = \{1, \dots, m\},$$

$$G_{ij} = \{k \in P \mid x_i^k > y_j^k\}, \quad E_{ij} = \{k \in P \mid x_i^k = y_j^k\}, \quad L_{ij} = \{k \in P \mid x_i^k < y_j^k\},$$

$$GE_{ij} = G_{ij} \cup E_{ij}, \quad LE_{ij} = L_{ij} \cup E_{ij}.$$

By the following learning scheme (3), we obtain the connection weight matrix $W_0 = (w_{ij}^0)_{n\times m}$:

$$w_{ij}^0 = \begin{cases} \bigwedge_{k\in G_{ij}} y_j^k & \text{if } G_{ij} \neq \emptyset, \\ 1 & \text{if } G_{ij} = \emptyset. \end{cases} \tag{3}$$

Also we let

$$S_{ij} = \{k \in GE_{ij} \mid y_j^k \leq w_{ij}^0\},$$

$$W(P) = \{W \in \mu_{n\times m} \mid X_k \circ W = Y_k,\ k \in P\}.$$

The following result comes from [6].


Theorem 1. $W(P) \neq \emptyset$ if and only if for each $j \in M$, $\bigcup_{i\in N} S_{ij} = P$. Moreover, if the condition holds, then $W_0 = (w_{ij}^0) \in W(P)$.
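A minimal sketch (our illustration, with hypothetical function names) of scheme (3) and of the storage test of Theorem 1:

```python
import numpy as np

def fan_w0(X, Y):
    """Scheme (3): w0_ij = min of y_j^k over G_ij = {k : x_i^k > y_j^k}, or 1 if G_ij is empty."""
    p, n = X.shape
    m = Y.shape[1]
    W0 = np.ones((n, m))
    for i in range(n):
        for j in range(m):
            g = X[:, i] > Y[:, j]            # indicator of G_ij over k
            if g.any():
                W0[i, j] = Y[g, j].min()
    return W0

def fan_storable(X, Y):
    """Theorem 1: W(P) is nonempty iff, for each j, the sets S_ij cover P."""
    W0 = fan_w0(X, Y)
    for j in range(Y.shape[1]):
        covered = np.zeros(X.shape[0], dtype=bool)
        for i in range(X.shape[1]):
            ge = X[:, i] >= Y[:, j]                  # GE_ij = G_ij U E_ij
            covered |= ge & (Y[:, j] <= W0[i, j])    # S_ij
        if not covered.all():
            return False
    return True
```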

Definition 1. Let $W \in \mu_{n\times m}$ and

$$F(W) = \{(X, Y) \in [0,1]^n \times [0,1]^m \mid X \circ W = Y\}.$$

We call $F(W)$ the FAM space of model (2).

How do we improve max-min FNNs so that as many fuzzy pattern pairs as possible can be stored in the network? The principal way to solve the problem is to enlarge the FAM space of the corresponding FNN. Kosko [10], Fan [5, 6], He [8] and Chung [4] have done much work on this problem. Based on these achievements, we shall improve the storage capacity of the max-min FNN by introducing a threshold at each unit of the network. At first, we consider a three-layer max-min FNN as shown in Fig. 2. The I/O relations of the FNN are as follows:

$$o_k = \bigvee_{i=1}^{n} (x_i \wedge w_{ik}^{(1)}), \qquad y_j = \bigvee_{k=1}^{l} (o_k \wedge w_{kj}^{(2)}), \tag{4}$$

where $j = 1, \dots, m$. By the following theorem, we cannot improve the storage capacity of the max-min FNN by adding hidden layers to the network.


Fig. 2. Three-layer max-min FNN: input layer, hidden layer, output layer.

Theorem 2. Let $W_1 = (w_{ik}^{(1)})_{n\times l}$, $W_2 = (w_{kj}^{(2)})_{l\times m}$, and let $F_1(W_1, W_2)$ be the FAM space of model (4), i.e.,

$$F_1(W_1, W_2) = \{(X, Y) \in [0,1]^n \times [0,1]^m \mid X = (x_1, \dots, x_n),\ Y = (y_1, \dots, y_m) \text{ satisfy } (4)\};$$

then the following hold: (1) For given $W_1, W_2$, there is a matrix $W \in \mu_{n\times m}$ such that $F_1(W_1, W_2) \subset F(W)$; (2) If $l \geq m \wedge n$, then for $W \in \mu_{n\times m}$, there are $W_1 \in \mu_{n\times l}$, $W_2 \in \mu_{l\times m}$, such that $F(W) \subset F_1(W_1, W_2)$.

Proof. (1) For arbitrary $(X, Y) \in F_1(W_1, W_2)$, $X = (x_1, \dots, x_n)$, $Y = (y_1, \dots, y_m)$, the following holds by the assumption:

$$y_j = \bigvee_{k=1}^{l} \Bigl(\Bigl(\bigvee_{i=1}^{n} (x_i \wedge w_{ik}^{(1)})\Bigr) \wedge w_{kj}^{(2)}\Bigr) = \bigvee_{k=1}^{l} \bigvee_{i=1}^{n} (x_i \wedge w_{ik}^{(1)} \wedge w_{kj}^{(2)}) = \bigvee_{i=1}^{n} \Bigl(x_i \wedge \Bigl(\bigvee_{k=1}^{l} (w_{ik}^{(1)} \wedge w_{kj}^{(2)})\Bigr)\Bigr),$$

where $j \in M$. We let $W = (w_{ij})_{n\times m}$, where

$$w_{ij} = \bigvee_{k=1}^{l} (w_{ik}^{(1)} \wedge w_{kj}^{(2)}), \quad i \in N,\ j \in M.$$

Thus, $y_j = \bigvee_{i=1}^{n} (x_i \wedge w_{ij})$ $(j \in M)$, i.e. $Y = X \circ W$. So $(X, Y) \in F(W)$, hence $F_1(W_1, W_2) \subset F(W)$, and (1) holds.

(2) Suppose $l \geq m$, and $(X, Y) \in F(W)$, $X = (x_1, \dots, x_n)$, $Y = (y_1, \dots, y_m)$; then for each $j \in M$, $y_j = \bigvee_{i=1}^{n} (x_i \wedge w_{ij})$ holds. We define $w_{ik}^{(1)}$, $w_{kj}^{(2)}$ for $i \in N$, $j \in M$, $k = 1, \dots, l$ as follows:

$$w_{ik}^{(1)} = \begin{cases} w_{ik}, & k \leq m, \\ 0, & m < k \leq l, \end{cases} \qquad w_{kj}^{(2)} = \begin{cases} 1, & k \leq m,\ k = j, \\ 0, & \text{otherwise}. \end{cases}$$

Then by (4), we obtain

$$o_k = \begin{cases} \bigvee_{i=1}^{n} (x_i \wedge w_{ik}), & k \leq m, \\ 0, & m < k \leq l. \end{cases}$$


So $\bigvee_{k=1}^{l} (o_k \wedge w_{kj}^{(2)}) = o_j = \bigvee_{i=1}^{n} (x_i \wedge w_{ij}) = y_j$ holds for each $j \in M$. Therefore, if we denote $W_1 = (w_{ik}^{(1)})_{n\times l}$, $W_2 = (w_{kj}^{(2)})_{l\times m}$, then $(X, Y) \in F_1(W_1, W_2)$, so $F(W) \subset F_1(W_1, W_2)$.

If $l \geq n$, then we define $W_1 = (w_{ik}^{(1)})_{n\times l}$, $W_2 = (w_{kj}^{(2)})_{l\times m}$ as follows:

$$w_{ik}^{(1)} = \begin{cases} 1, & k \leq n,\ k = i, \\ 0, & \text{otherwise}, \end{cases} \qquad w_{kj}^{(2)} = \begin{cases} w_{kj}, & k \leq n, \\ 0, & n < k \leq l. \end{cases}$$

With the same steps, we may prove that $o_k = \bigvee_{i=1}^{n} (x_i \wedge w_{ik}^{(1)})$ and $y_j = \bigvee_{k=1}^{l} (o_k \wedge w_{kj}^{(2)})$ hold for every $j \in M$, i.e. $(X, Y) \in F_1(W_1, W_2)$. So $F(W) \subset F_1(W_1, W_2)$. Thus (2) is proved. □

By Theorem 2, in order to improve the storage capacity of the max-min FNN, what we must do is to modify the connection scheme of the neurons of the max-min FNN, not add hidden layers to the network.
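Part (1) of Theorem 2 is constructive and easy to check numerically. Below is a small sketch of ours that collapses the hidden layer via $w_{ij} = \bigvee_{k=1}^{l} (w_{ik}^{(1)} \wedge w_{kj}^{(2)})$ and verifies that the two I/O maps agree on a random input; all names are illustrative:

```python
import numpy as np

def collapse(W1, W2):
    """w_ij = max_k min(w1_ik, w2_kj): the two-layer matrix of Theorem 2, part (1).
    W1 has shape (n, l), W2 has shape (l, m); the result has shape (n, m)."""
    return np.max(np.minimum(W1[:, :, None], W2[None, :, :]), axis=1)

# Quick check: the three-layer map X -> O -> Y of (4) equals X o collapse(W1, W2).
rng = np.random.default_rng(0)
n, l, m = 4, 6, 3
W1, W2 = rng.random((n, l)), rng.random((l, m))
x = rng.random(n)
o = np.max(np.minimum(x[:, None], W1), axis=0)        # hidden layer of relation (4)
y3 = np.max(np.minimum(o[:, None], W2), axis=0)       # output of the three-layer net
y2 = np.max(np.minimum(x[:, None], collapse(W1, W2)), axis=0)
assert np.allclose(y2, y3)
```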

3. Max-min FNNs with threshold

In this section, we introduce a threshold at each unit of the two-layer FNN shown in Fig. 1. Suppose $c_i \in [0,1]$ is the threshold of unit $i$ $(i \in N)$ in the input layer, and $d_j$ the threshold of unit $j$ $(j \in M)$ in the output layer. The I/O relations are as follows:

$$y_j = \bigvee_{i=1}^{n} ((x_i \vee c_i) \wedge w_{ij}) \vee d_j = \bigvee_{i=1}^{n} ((x_i \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j)) \quad (j \in M).$$

With the fuzzy matrix $W = (w_{ij})_{n\times m}$ and the fuzzy vectors $C = (c_1, \dots, c_n)$, $D = (d_1, \dots, d_m)$, the above formula can be rewritten as follows:

$$Y = ((X \vee C) \circ W) \vee D. \tag{5}$$
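In code, the recall step (5) is one line of NumPy (a sketch of ours; names are illustrative):

```python
import numpy as np

def recall_threshold(x, W, c, d):
    """Relation (5): Y = ((X v C) o W) v D componentwise, i.e.
    y_j = (max_i min(max(x_i, c_i), w_ij)) v d_j."""
    xc = np.maximum(x, c)                                 # X v C
    return np.maximum(np.max(np.minimum(xc[:, None], W), axis=0), d)
```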

We design a learning algorithm (6) for $W, C, D$ in (5) as follows:

$$c_i^0 = \begin{cases} \bigwedge_{j\in D_i} \bigwedge_{k\in LE_{ij}} y_j^k & \text{if } D_i \neq \emptyset, \\ 0 & \text{if } D_i = \emptyset, \end{cases} \qquad d_j^0 = \bigwedge_{k\in P} y_j^k, \tag{6}$$

where $D_i = \{j \in M \mid LE_{ij} \neq \emptyset\}$. Also let

$$TG_{ij} = \{k \in P \mid x_i^k \vee c_i^0 \vee d_j^0 > y_j^k\}, \quad TE_{ij} = \{k \in P \mid x_i^k \vee c_i^0 \vee d_j^0 = y_j^k\}, \tag{7}$$

$$TGE_{ij} = TG_{ij} \cup TE_{ij},$$

$$TS_{ij} = \{k \in TGE_{ij} \mid y_j^k \leq w_{ij}^0 \vee d_j^0\}. \tag{8}$$

By the fact that $TGE_{ij} \supset GE_{ij}$ for each $i \in N$, $j \in M$ and the definitions of $w_{ij}^0$ and $d_j^0$, we have $d_j^0 \leq w_{ij}^0$, and the following holds:

$$TS_{ij} = \{k \in TGE_{ij} \mid y_j^k \leq w_{ij}^0\} \supset S_{ij} \quad (i \in N,\ j \in M). \tag{9}$$
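The following sketch (ours; `fan_w0` is the hypothetical scheme-(3) routine sketched in Section 2) implements algorithm (6) and the covering condition that Theorem 5 below shows to be equivalent to complete storage:

```python
import numpy as np

def learn_thresholds(X, Y):
    """Algorithm (6): d0_j = min_k y_j^k, and c0_i = min over j in D_i, k in LE_ij of y_j^k."""
    p, n = X.shape
    m = Y.shape[1]
    d0 = Y.min(axis=0)
    c0 = np.zeros(n)                                      # c0_i = 0 when D_i is empty
    for i in range(n):
        vals = [Y[k, j] for j in range(m) for k in range(p)
                if X[k, i] <= Y[k, j]]                    # k in LE_ij
        if vals:                                          # D_i nonempty
            c0[i] = min(vals)
    return c0, d0

def wcd_storable(X, Y, W0, c0, d0):
    """Condition of Theorem 5: for each j, the sets TS_ij must cover P."""
    for j in range(Y.shape[1]):
        covered = np.zeros(X.shape[0], dtype=bool)
        for i in range(X.shape[1]):
            tge = np.maximum(np.maximum(X[:, i], c0[i]), d0[j]) >= Y[:, j]  # TGE_ij
            covered |= tge & (Y[:, j] <= max(W0[i, j], d0[j]))              # TS_ij
        if not covered.all():
            return False
    return True
```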


We call the set

$$F_T(W, C, D) = \{(X, Y) \in [0,1]^n \times [0,1]^m \mid X, Y \text{ satisfy } (5)\}$$

the FAM space of FNN (5). For given fuzzy pattern pairs $(X_k, Y_k)$: $X_k = (x_1^k, \dots, x_n^k)$, $Y_k = (y_1^k, \dots, y_m^k)$ $(k \in P)$, and $W, C, D$, if $\forall k \in P$,

$(X_k, Y_k) \in F_T(W, C, D)$, we have

$$y_j^k = \bigvee_{i=1}^{n} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j)) \quad (j \in M,\ k \in P), \tag{10}$$

and we write the set

$$WCD(P) = \{(W, C, D) \mid W = (w_{ij}),\ C = (c_i),\ D = (d_j),\ (10) \text{ holds for } k \in P\}.$$

Theorem 3. Let $W = (w_{ij}) \in \mu_{n\times m}$, $C = (c_i) \in [0,1]^n$, $D = (d_j) \in [0,1]^m$, and $(W, C, D) \in WCD(P)$; then for all $i \in N$, $j \in M$, we have

$$w_{ij} \leq w_{ij}^0, \quad d_j \leq d_j^0. \tag{11}$$

Proof. At first, for $a, b \in [0,1]$, we define the operator $\alpha$ as follows:

$$a\,\alpha\,b = \begin{cases} 1 & \text{if } a \leq b, \\ b & \text{if } a > b. \end{cases}$$

By the above definition, it is obvious that for $a, b, c \in [0,1]$,

$$a\,\alpha\,(a \wedge b) \geq b, \quad \text{and if } b > c \text{ then } a\,\alpha\,b \geq a\,\alpha\,c. \tag{12}$$

We will only prove the first part of (11), for the second part is obvious: since (10) holds for each $k \in P$, we have $d_j \leq \bigwedge_{k\in P} y_j^k = d_j^0$ $(j \in M)$.

If $G_{ij} = \emptyset$, then $w_{ij}^0 = 1$, and our conclusion holds. So we suppose $G_{ij} \neq \emptyset$; hence there is a $k' \in G_{ij}$ such that $y_j^{k'} = \min\{y_j^k \mid k \in G_{ij}\}$. Thus,

$$w_{ij}^0 = \bigwedge_{k\in G_{ij}} y_j^k = y_j^{k'} < x_i^{k'}.$$

Therefore, by the assumptions and (12), we have

$$w_{ij}^0 = y_j^{k'} = x_i^{k'}\,\alpha\,y_j^{k'} = x_i^{k'}\,\alpha\,\Bigl(\bigvee_{i'=1}^{n} ((x_{i'}^{k'} \vee c_{i'} \vee d_j) \wedge (w_{i'j} \vee d_j))\Bigr) \geq x_i^{k'}\,\alpha\,(x_i^{k'} \wedge w_{ij}) \geq w_{ij}.$$

Thus, the proof is completed. □

From now on, we assume $w_{ij}$ to be the elements of a matrix $W \in \mu_{n\times m}$, and $c_i$, $d_j$ the components of $C \in [0,1]^n$, $D \in [0,1]^m$, respectively.

By Theorem 3, the connection weight matrix $W_0$ and the threshold vector $D_0$ of the output layer obtained by learning algorithm (6) are, respectively, the largest ones which can store the given family of fuzzy patterns $\{(X_k, Y_k) \mid k \in P\}$. By the following theorem, a similar property holds for $C_0$.


Theorem 4. Let $(W, C, D) \in WCD(P)$; then there is a threshold vector $C_1 = (c_1^1, \dots, c_n^1)$ such that $\forall i \in N$, $c_i^1 \leq c_i^0$, and $(W, C_1, D_0) \in WCD(P)$.

Proof. Because $(W, C, D) \in WCD(P)$, the following holds:

$$y_j^k = \bigvee_{i=1}^{n} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j)) \quad (k \in P,\ j \in M). \tag{13}$$

For given $i \in N$, we define $c_i^1$ in two cases.

Case I: $D_i = \emptyset$. We define $c_i^1 = 0$. Because $\forall k \in P$, $j \in M$, $x_i^k > y_j^k$, considering (6) and Theorem 3 we obtain $x_i^k > w_{ij}^0 \vee d_j^0 \geq w_{ij} \vee d_j$, and consequently,

$$D_i = \emptyset \implies (x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j) = w_{ij} \vee d_j \leq (x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0) \leq y_j^k. \tag{14}$$

Case II: $D_i \neq \emptyset$. We let $c_i^1 = \bigwedge_{j\in M,\,k\in P} y_j^k$, and denote the set

$$O_i = \{(k, j) \in P \times M \mid c_i > y_j^k\}.$$

We may conclude that

$$O_i \neq \emptyset \implies \forall j \in M,\ w_{ij} \leq \bigwedge_{k\in P} y_j^k. \tag{15}$$

In fact, if $k \in P$, $j \in M$ satisfy $(k, j) \in O_i$, then by (13), $w_{ij} \leq y_j^k$ must hold. So $w_{ij} \leq \bigwedge_{k\mid (k,j)\in O_i} y_j^k$. Because of the fact that

$$(k_1, j_1) \in O_i,\ (k_2, j_2) \notin O_i \implies y_{j_1}^{k_1} < c_i \leq y_{j_2}^{k_2},$$

$\bigwedge_{k\mid (k,j)\in O_i} y_j^k \leq \bigwedge_{k\mid (k,j)\notin O_i} y_j^k$ holds, which implies

$$\bigwedge_{k\in P} y_j^k = \Bigl(\bigwedge_{k\mid (k,j)\in O_i} y_j^k\Bigr) \wedge \Bigl(\bigwedge_{k\mid (k,j)\notin O_i} y_j^k\Bigr) = \bigwedge_{k\mid (k,j)\in O_i} y_j^k.$$

Therefore, $w_{ij} \leq \bigwedge_{k\in P} y_j^k$, i.e. (15) holds. Thus, $\forall k \in P$, $j \in M$,

$$O_i \neq \emptyset \implies (x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j) \leq \bigwedge_{k\in P} y_j^k \leq (x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0) \leq y_j^k. \tag{16}$$

If $O_i = \emptyset$, then $\forall k \in P$, $j \in M$, $c_i \leq y_j^k$, which implies $c_i \leq \bigwedge_{k\in P,\,j\in M} y_j^k = c_i^1$. So $\forall k \in P$, $j \in M$,

$$O_i = \emptyset \implies (x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j) \leq (x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0) \leq y_j^k. \tag{17}$$

Considering (16) and (17), we obtain

$$D_i \neq \emptyset \implies (x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j) \leq (x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0) \leq y_j^k. \tag{18}$$

We define $C_1 = (c_1^1, \dots, c_n^1)$ as follows:

$$c_i^1 = \begin{cases} \bigwedge_{k\in P,\,j\in M} y_j^k, & D_i \neq \emptyset, \\ 0, & D_i = \emptyset. \end{cases}$$


By (6), it is obvious that $\forall i \in N$, $c_i^1 \leq c_i^0$. By (13), (14) and (18), the following holds for every $j \in M$, $k \in P$:

$$y_j^k = \bigvee_{i\in N} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j))$$

$$= \Bigl(\bigvee_{i\mid D_i=\emptyset} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j))\Bigr) \vee \Bigl(\bigvee_{i\mid D_i\neq\emptyset} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j))\Bigr)$$

$$\leq \bigvee_{i\in N} ((x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0)) \leq y_j^k.$$

So $\bigvee_{i\in N} ((x_i^k \vee c_i^1 \vee d_j^0) \wedge (w_{ij} \vee d_j^0)) = y_j^k$ $(k \in P,\ j \in M)$, i.e. $((X_k \vee C_1) \circ W) \vee D_0 = Y_k$ $(k \in P)$. So $(W, C_1, D_0) \in WCD(P)$. □

To obtain the equivalent conditions that the FAM space of (5) contains the given fuzzy patterns, we first prove the following lemma.

Lemma 1. Let $j \in M$, $k \in P$, and

$$\lambda_1 = \bigvee_{i\mid k\notin TS_{ij}} ((x_i^k \vee c_i^0 \vee d_j^0) \wedge (w_{ij}^0 \vee d_j^0)), \qquad \lambda_2 = \bigvee_{i\mid k\in TS_{ij}} ((x_i^k \vee c_i^0 \vee d_j^0) \wedge (w_{ij}^0 \vee d_j^0));$$

then $\lambda_1 < y_j^k$, $\lambda_2 = y_j^k$.

Proof. If $k \notin TS_{ij}$, then either $x_i^k \vee c_i^0 \vee d_j^0 < y_j^k$ or $x_i^k \vee c_i^0 \vee d_j^0 \geq y_j^k > w_{ij}^0 \vee d_j^0$, which implies $(x_i^k \vee c_i^0 \vee d_j^0) \wedge (w_{ij}^0 \vee d_j^0) < y_j^k$. Therefore $\lambda_1 < y_j^k$ by the definition of $\lambda_1$.

If $k \in TS_{ij}$, then either $x_i^k \vee c_i^0 \vee d_j^0 = y_j^k \leq w_{ij}^0 \vee d_j^0$ or $x_i^k \vee c_i^0 \vee d_j^0 > y_j^k$, $y_j^k \leq w_{ij}^0 \vee d_j^0$. Since $d_j^0 \leq y_j^k$, $x_i^k \vee c_i^0 \vee d_j^0 > y_j^k$ implies that $x_i^k > y_j^k$ must hold: otherwise, $x_i^k \leq y_j^k$ implies $c_i^0 \leq y_j^k$ by the definition of $c_i^0$, therefore $x_i^k \vee c_i^0 \vee d_j^0 \leq y_j^k$, a contradiction. By the definitions of $w_{ij}^0$ and $d_j^0$, we have $y_j^k \geq w_{ij}^0 \vee d_j^0$. Thus, in the second case, $y_j^k = w_{ij}^0 \vee d_j^0$. Hence when $k \in TS_{ij}$,

$$(x_i^k \vee c_i^0 \vee d_j^0) \wedge (w_{ij}^0 \vee d_j^0) = y_j^k$$

must hold, i.e. we have $\lambda_2 = y_j^k$. □

Theorem 5. For fuzzy pattern pairs $(X_k, Y_k)$ $(k \in P)$, the necessary and sufficient condition for $WCD(P) \neq \emptyset$ is that for any $j \in M$, $\bigcup_{i\in N} TS_{ij} = P$.

Proof. Necessity: Let $W = (w_{ij}) \in \mu_{n\times m}$, $C = (c_i) \in [0,1]^n$, $D = (d_j) \in [0,1]^m$, and $(W, C, D) \in WCD(P)$. If our conclusion does not hold, there is a $j_0 \in M$ such that $\bigcup_{i\in N} TS_{ij_0} \neq P$, so there exists a $k \in P$ such that for each $i \in N$, $k \notin TS_{ij_0}$.


Thus, for each $i \in N$, either $k \notin TGE_{ij_0}$ or $k \in TGE_{ij_0}$ and $y_{j_0}^k > w_{ij_0}^0 \vee d_{j_0}^0$. Therefore, by Lemma 1,

$$\bigvee_{i\in N} ((x_i^k \vee c_i^0 \vee d_{j_0}^0) \wedge (w_{ij_0}^0 \vee d_{j_0}^0)) < y_{j_0}^k. \tag{19}$$

Considering the assumptions and Theorem 4, there is a vector $C_1 = (c_1^1, \dots, c_n^1)$ with $\forall i \in N$, $c_i^1 \leq c_i^0$, such that $(W, C_1, D_0) \in WCD(P)$. So by (19) and Theorem 3, we have

$$y_{j_0}^k = \bigvee_{i\in N} ((x_i^k \vee c_i^1 \vee d_{j_0}^0) \wedge (w_{ij_0} \vee d_{j_0}^0)) \leq \bigvee_{i\in N} ((x_i^k \vee c_i^0 \vee d_{j_0}^0) \wedge (w_{ij_0}^0 \vee d_{j_0}^0)) < y_{j_0}^k,$$

a contradiction; thus necessity holds.

Sufficiency: For each $j \in M$, $k \in P$, there exists $i \in N$ such that $k \in TS_{ij}$. So if we let $w_{ij} = w_{ij}^0$, $c_i = c_i^0$, $d_j = d_j^0$,

then by Lemma 1 we have

$$\bigvee_{i\in N} ((x_i^k \vee c_i \vee d_j) \wedge (w_{ij} \vee d_j)) = \bigvee_{i\mid k\in TS_{ij}} ((x_i^k \vee c_i^0 \vee d_j^0) \wedge (w_{ij}^0 \vee d_j^0)) = y_j^k.$$

Therefore, if $W_0 = (w_{ij}^0)$, $C_0 = (c_i^0)$, $D_0 = (d_j^0)$, then $(W_0, C_0, D_0) \in WCD(P)$, and $WCD(P) \neq \emptyset$, which proves the theorem. □

The following theorem shows that Theorem 5 generalizes Fan's methods [6].

Theorem 6. For fuzzy pattern pairs $(X_k, Y_k)$ $(k \in P)$, if there exists $W \in \mu_{n\times m}$ such that $X_k \circ W = Y_k$ $(k \in P)$, and $W_0 = (w_{ij}^0)$, $C_0 = (c_i^0)$, $D_0 = (d_j^0)$, then $(W_0, C_0, D_0) \in WCD(P)$.

Proof. By the assumptions, $W(P) \neq \emptyset$; thus Theorem 1 implies that for each $j \in M$, $\bigcup_{i\in N} S_{ij} = P$. Because of (9), we obtain for each $j \in M$, $\bigcup_{i\in N} TS_{ij} = P$. Thus by Theorem 5, $WCD(P) \neq \emptyset$; moreover $(W_0, C_0, D_0) \in WCD(P)$. The conclusions are proved. □

By Theorem 6, if a family of fuzzy pattern pairs is contained in the FAM space of (2) with Fan's learning algorithm, then these fuzzy pattern pairs must lie in the FAM space of (5) with learning algorithm (6). Therefore, FNN model (5) can store more fuzzy patterns than (2) can; consequently, (5) is more applicable than (2).


4. Example

In this section, we use the learning algorithm (6) for $W, C, D$ so that given fuzzy patterns which cannot be stored in (2) can be stored in (5). Through the example, we may conclude that the max-min FNN (5) with threshold is more effective for storing fuzzy patterns than the max-min FNN (2). So (5) is more applicable in practice.

Suppose $N = \{1, 2, 3, 4, 5\}$, $M = \{1, 2, 3\}$, $P = \{1, 2, \dots, 8\}$, and the fuzzy pattern pairs $(X_k, Y_k)$ $(k \in P)$ are as shown in Table 1.

We realize the learning algorithm (6) through the following scheme:

Step 1: For each $i \in N$, $j \in M$, determine the sets $G_{ij}$, $LE_{ij}$.
Step 2: For $i \in N$, $j \in M$, calculate the values of $c_i^0$, $d_j^0$.
Step 3: For every $i \in N$, $j \in M$, calculate the value of $w_{ij}^0$.
Step 4: For each $i \in N$, $j \in M$, determine the sets $TG_{ij}$, $TE_{ij}$, $TGE_{ij}$ and $TS_{ij}$.
Step 5: For every $j \in M$, decide whether $\bigcup_{i\in N} TS_{ij} = P$ holds; if so, go to Step 6; otherwise, go to Step 7.
Step 6: Write down the values of $W_0 = (w_{ij}^0)$, $C_0 = (c_i^0)$, $D_0 = (d_j^0)$.
Step 7: Stop.

By the above scheme, we obtain $C_0 = (0.3, 0.3, 0.3, 0.3, 0.3)$ and $D_0 = (0.5, 0.6, 0.3)$, the threshold vectors of the input units and output units, respectively, and the connection weight matrix $W_0$ as follows:

$$W_0^{\mathrm{T}} = \begin{pmatrix} 0.5 & 1.0 & 1.0 & 0.6 & 1.0 \\ 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ 0.3 & 0.3 & 0.3 & 0.3 & 1.0 \end{pmatrix}.$$

We may easily verify that the given fuzzy pattern pairs $(X_k, Y_k)$ $(k = 1, \dots, 8)$ satisfy the condition that for each $j \in M$, $\bigcup_{i\in N} TS_{ij} = P$. So by Theorem 5, the family of fuzzy patterns $(X_k, Y_k)$ $(k = 1, \dots, 8)$ can be completely stored by our FNN model. With Fan's method, we may compute the connection matrix $W_1 = W_0$. Obviously, only the fuzzy pattern pairs $(X_3, Y_3)$, $(X_5, Y_5)$, $(X_7, Y_7)$, $(X_8, Y_8)$ can be stored, and $(X_1, Y_1)$, $(X_2, Y_2)$, $(X_4, Y_4)$, $(X_6, Y_6)$ cannot be stored. If we use the fuzzy Hebbian rule $w_{ij} = \bigvee_{k\in P} (x_i^k \wedge y_j^k)$, then $W = (w_{ij})$ is as follows:

$$W^{\mathrm{T}} = \begin{pmatrix} 0.6 & 0.5 & 0.7 & 0.7 & 0.5 \\ 0.8 & 0.5 & 0.7 & 0.7 & 0.5 \\ 0.5 & 0.4 & 0.5 & 0.5 & 0.5 \end{pmatrix}.$$

Obviously, only the fuzzy pattern pair $(X_5, Y_5)$ can be stored in the learning matrix $W$ obtained by the fuzzy Hebbian rule. So the storage capacity of the max-min FNN with the fuzzy Hebbian rule is very low.

Table 1

k    $X_k$                        $Y_k$
1    (0.5, 0.5, 0.4, 0.4, 0.3)    (0.5, 0.6, 0.3)
2    (0.1, 0.3, 0.3, 0.4, 0.4)    (0.5, 0.6, 0.4)
3    (0.8, 0.4, 0.6, 0.7, 0.4)    (0.6, 0.8, 0.4)
4    (0.3, 0.4, 0.4, 0.3, 0.4)    (0.5, 0.6, 0.4)
5    (0.6, 0.4, 0.7, 0.7, 0.5)    (0.7, 0.7, 0.5)
6    (0.1, 0.1, 0.2, 0.2, 0.1)    (0.5, 0.6, 0.3)
7    (0.7, 0.2, 0.4, 0.3, 0.2)    (0.5, 0.7, 0.3)
8    (0.8, 0.4, 0.3, 0.4, 0.2)    (0.5, 0.8, 0.3)
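For the reader who wants to reproduce the example, feeding Table 1 into the sketches from Sections 2 and 3 recovers the quantities above (our illustration; the helper functions are the hypothetical ones sketched earlier):

```python
import numpy as np

X = np.array([[0.5, 0.5, 0.4, 0.4, 0.3],
              [0.1, 0.3, 0.3, 0.4, 0.4],
              [0.8, 0.4, 0.6, 0.7, 0.4],
              [0.3, 0.4, 0.4, 0.3, 0.4],
              [0.6, 0.4, 0.7, 0.7, 0.5],
              [0.1, 0.1, 0.2, 0.2, 0.1],
              [0.7, 0.2, 0.4, 0.3, 0.2],
              [0.8, 0.4, 0.3, 0.4, 0.2]])
Y = np.array([[0.5, 0.6, 0.3],
              [0.5, 0.6, 0.4],
              [0.6, 0.8, 0.4],
              [0.5, 0.6, 0.4],
              [0.7, 0.7, 0.5],
              [0.5, 0.6, 0.3],
              [0.5, 0.7, 0.3],
              [0.5, 0.8, 0.3]])

W0 = fan_w0(X, Y)                      # scheme (3)
c0, d0 = learn_thresholds(X, Y)        # algorithm (6)
print(c0, d0)                          # (0.3, 0.3, 0.3, 0.3, 0.3) and (0.5, 0.6, 0.3)
assert wcd_storable(X, Y, W0, c0, d0)  # Theorem 5: all eight pairs are storable
for k in range(8):                     # perfect recall under model (5)
    assert np.allclose(recall_threshold(X[k], W0, c0, d0), Y[k])
```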


Acknowledgements

I would like to thank the members of our subject group for many good suggestions on this paper.


References

[1] A. Blanco, M. Delgado, I. Requena, Identification of fuzzy relational equations by fuzzy neural networks, Fuzzy Sets and Systems 71 (1995) 215-226.
[2] J.J. Buckley, Y. Hayashi, Can fuzzy neural nets approximate continuous fuzzy functions?, Fuzzy Sets and Systems 61 (1994) 43-51.
[3] J.J. Buckley, Y. Hayashi, Fuzzy neural networks: a survey, Fuzzy Sets and Systems 66 (1994) 1-13.
[4] F.L. Chung, T. Lee, On fuzzy associative memory with multiple-rule storage capacity, IEEE Trans. Fuzzy Systems 4 (3) (1996) 375-384.
[5] J.B. Fan, F. Jin, X. Yuan, A learning algorithm for multiple fuzzy pattern pair associative memory, Proc. IJCNN'92, vol. 3, 1992.
[6] J.B. Fan, F. Jin, X. Yuan, An efficient learning algorithm for fuzzy associative memories, Acta Electr. Sinica 24 (1996) 112-114.
[7] S. Grossberg, Some networks that can learn, remember, and reproduce any number of complicated space-time patterns II, Stud. Appl. Math. 49 (1970) 135-160.
[8] F.D. He, A new self-adaptive learning rule for multiple pattern pairs fuzzy associative memory, J. Southwest Jiaotong University (4) (1993) 13-17.
[9] H. Ishibuchi, K. Kwon, H. Tanaka, A learning algorithm of fuzzy neural networks with triangular fuzzy weights, Fuzzy Sets and Systems 71 (1995) 277-293.
[10] B. Kosko, Fuzzy associative memories, in: A. Kandel (Ed.), Fuzzy Expert Systems, Addison-Wesley, Reading, MA, 1987.
[11] B. Kosko, Bidirectional associative memories, IEEE Trans. SMC 18 (1988) 49-60.