
[IEEE IECON 2013 - 39th Annual Conference of the IEEE Industrial Electronics Society - Vienna, Austria (2013.11.10-2013.11.13)]

Implementation of Quadric Perceptron with Hardlims Activation Function in a FPGA for Nonlinear Pattern Classification

Raymundo Cordero García, Walter Issamu Suemitsu Department of Electrical Engineering - COPPE

Federal University of Rio de Janeiro Rio de Janeiro, Brazil

[email protected], [email protected]

João Onofre Pereira Pinto, André Muniz Soares Laboratory of Digital Systems and Power Electronics

Federal University of Mato Grosso do Sul Mato Grosso do Sul, Brazil

[email protected], [email protected]

Abstract—This paper deals with the design and implementation of an artificial neural network for pattern classification in the FPGA EP2C20F484C7. A perceptron with a quadratic decision boundary is used as the nonlinear classification system, but with a hardlims activation function instead of a sigmoid. The training algorithm is similar to that of the conventional perceptron. Eliminating the sigmoid function simplifies the implementation of quadratic perceptrons. As the mentioned FPGA performs neither floating-point nor fixed-point multiplications, the synaptic weights were normalized to integers. The proposed quadratic perceptron is tested on a set of classification problems and compared with the multilayer perceptron. An example of an experimental implementation of the proposed classification system is shown, including parameters of its computational cost.

Keywords—Artificial neural networks, decision boundary, FPGA, multilayer perceptron, quadratic perceptron.

I. INTRODUCTION

Pattern classification is a main problem in the field of artificial intelligence, with applications in electrical engineering, medicine, statistics, economics and other areas [1]-[2]. Artificial Neural Networks (ANN), in turn, are robust against noisy and missing data, can model linear and nonlinear processes, and have the capability of interpolation and extrapolation [2], making them suitable for implementing pattern classification systems.

Additionally, an ANN performs parallel distributed processing, which is also one of the main advantages of a FPGA, considerably reducing the processing time. Another advantage of using a FPGA to implement an ANN is that it makes possible the fabrication of practical, commercial circuits based on ASICs. However, many ANNs use mathematical functions such as the sigmoid, which are difficult to compute in a FPGA. As a result, much research has been done to improve the implementation of ANNs in FPGAs [3]-[4]. This paper deals specifically with the implementation of pattern classification systems in the ALTERA FPGA EP2C20F484C7.

The perceptron is the simplest neural network. It consists of a single artificial neuron with hardlim (0 or 1) or hardlims (-1 or 1) as activation function [5]. It can only be used for linearly separable classification problems. One alternative for dealing with nonlinear classification problems is the multilayer perceptron (MLP) [6]. This neural network is composed of many artificial neurons with linear or sigmoid activation functions arranged in several layers. It can create decision surfaces more complex than hyperplanes. However, the number of layers, the number of neurons in each layer and the activation functions depend on the application. Other research has worked on the structure of the perceptron itself. In [7], a conic decision surface based on Clifford algebra is considered. However, the estimation of the synaptic weights is extremely complicated.

The quadratic perceptron is an artificial neuron whose decision boundary is a polynomial of order two; it has a sigmoid activation function and uses the delta rule for training [8]-[11]. This perceptron can form quadratic surfaces as decision boundaries. Only one quadratic perceptron is needed to solve the XOR problem, instead of the three linear perceptrons integrated in a multilayer perceptron [11]. One disadvantage is the number of terms generated by the quadratic products xixj. Another is that this perceptron needs a sigmoid function, which has a high computational cost, especially for generating the exponential function. On the other hand, an advantage of the quadratic perceptron is that, unlike the MLP, it does not require selecting an activation function for the training procedure.

Implementations of linear perceptrons in FPGA are described in [12], [13], but no literature was found about hardware implementations of quadratic perceptrons in FPGA. Classification systems implemented in hardware are focused on MLP, support vector machines and decision trees [14]-[15]. The main disadvantage of these algorithms is the high computational cost required to implement the exponential and sigmoid functions, which are difficult to realize in a FPGA [13]. A quadratic classification boundary is easier to implement because it is composed only of polynomial terms. The authors of this work consider the quadratic perceptron a simple and powerful tool for implementing nonlinear classification systems in FPGA. Therefore, its implementation must be analyzed.

978-1-4799-0224-8/13/$31.00 ©2013 IEEE 2432


This paper describes the implementation of a quadratic perceptron in the FPGA EP2C20F484C7. A hardlims activation function is used instead of a sigmoid in order to simplify the implementation of the perceptron. The capability of the quadratic perceptron with hardlims activation function as a classification system is tested on different classification problems and compared with the performance of MLPs. Simulation and experimental results validate the advantages of using the proposed quadratic perceptron in pattern classification.

II. LINEAR PERCEPTRON

A binary classification system determines whether a given sample belongs to one of two possible categories (H1 or H2), based on the analysis of m characteristics xi (i = 1, …, m). These variables can be represented as a vector X in an m-dimensional Euclidean space:

X = [x1 x2 … xm]^T (1)

The conventional perceptron is adequate for linearly separable classification problems. Its structure is shown in (2) and (3):

u = WX + c (2)

Y = f(u) (3)

where W = [w1 w2 … wm]^T is the vector of synaptic weights, c is the bias, u is the excitation input, and f(u) is the hardlims activation function that determines the output Y:

f(u) = { 1, u > 0; −1, u < 0 } (4)

The set of points such that WX + c = 0 is called the decision surface or decision boundary [2]; it has the structure of a hyperplane. A training algorithm is needed to adjust the synaptic weights so that the data are classified properly. Rosenblatt [2] defines a simple technique for this training: with e(k) the error between the desired output d(k) and the estimated output y(k), the synaptic weights are updated as follows:

e(k) = d(k) − y(k) (5)

w(k + 1) = w(k) + η e(k) x(k)
c(k + 1) = c(k) + η e(k) (6)

where k represents the time step. The training algorithm in (6) converges if there is a hyperplane that classifies the data properly [2].
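As an illustration, the training rule in (5)-(6) can be sketched in Python (a minimal sketch of ours, not the authors' implementation; the toy dataset and learning rate are placeholders):

```python
import numpy as np

def hardlims(u):
    # hardlims activation (4): +1 if u > 0, else -1
    return 1 if u > 0 else -1

def train_linear_perceptron(X, d, eta=0.1, epochs=100):
    """Rosenblatt training rule (5)-(6) for a hardlims perceptron."""
    m = X.shape[1]
    w = np.zeros(m)          # synaptic weights W
    c = 0.0                  # bias
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = hardlims(w @ x + c)   # excitation u = WX + c, output (2)-(3)
            e = target - y            # error (5)
            w = w + eta * e * x       # weight update (6)
            c = c + eta * e           # bias update (6)
    return w, c

# Linearly separable toy data (hypothetical): class of sign(x1 + x2)
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, -0.5]])
d = np.array([1, 1, -1, -1])
w, c = train_linear_perceptron(X, d)
```

Since this toy set is linearly separable, the rule converges after a few epochs, as stated above.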

III. PROPOSED QUADRATIC PERCEPTRON

A. Structure of the Proposed Quadratic Perceptron

A conventional quadratic perceptron has a polynomial of order two as its decision boundary [8]:

y = f( Σ_{i,j=1}^{m} a_ij x_i x_j + Σ_{i=1}^{m} b_i x_i + c ) (7)

where f(u) is the sigmoid function:

f(u) = 1 / (1 + e^(−au)) (8)

As mentioned, its drawbacks are the number of terms generated by the quadratic products xixj and the computational cost of the sigmoid function, especially the exponential. Conventional quadratic perceptrons are usually trained using the delta rule [8].

This research proposes replacing the sigmoid function with a hardlims function for pattern classification, which is simpler to implement in a FPGA. The decision boundary of the quadratic perceptron is:

Σ_{i,j=1}^{m} a_ij x_i x_j + Σ_{i=1}^{m} b_i x_i + c = 0 (9)

Equation (9) can be expressed in matrix notation [17]:

[x1 … xn] [a11 … a1n; … ; an1 … ann] [x1; … ; xn] + [b1 … bn] [x1; … ; xn] + c = 0

X^T A X + B^T X + c = 0 (10)

The matrix A in (10) is symmetric or can be expressed using a symmetric matrix [17]. The proposed quadratic perceptron has the following structure:

f(u) = { 1, if X^T A X + B^T X + c > 0; −1, otherwise } (11)
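A minimal sketch of the output computation in (11) (the 2-D coefficients below are hypothetical, chosen only to illustrate a quadratic boundary):

```python
import numpy as np

def quadratic_perceptron_output(A, B, c, X):
    # Output (11): +1 if X^T A X + B^T X + c > 0, otherwise -1
    u = X @ A @ X + B @ X + c
    return 1 if u > 0 else -1

# Hypothetical coefficients: boundary x1^2 - x2^2 - 0.25 = 0 (a hyperbola)
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([0.0, 0.0])
c = -0.25
print(quadratic_perceptron_output(A, B, c, np.array([1.0, 0.0])))   # prints 1
```

Only multiplications and additions are needed to evaluate u, which is what makes this structure attractive for a FPGA.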

B. Training System of the Proposed Quadratic Perceptron

In this work, the updating equations for the proposed perceptron are based on treating the quadratic, linear and constant terms as independent variables in (9):



Σ_{i,j=1}^{m} a_ij x_i x_j + Σ_{i=1}^{m} b_i x_i + c = [a11 … amm] [x1x1; … ; xmxm] + [b1 … bm] [x1; … ; xm] + c (12)

Applying sub-matrix algebra in (12) [2]:

[a11 a12 … amm b1 … bm] [x1x1; x1x2; … ; xmxm; x1; … ; xm] + c (13)

Equations (13) and (2) have the same structure, considering the products xixj as new input variables. By analogy:

a_ij(k + 1) = a_ij(k) + η e(k) x_i x_j
b_i(k + 1) = b_i(k) + η e(k) x_i
c(k + 1) = c(k) + η e(k) (14)

The training algorithm for the proposed quadratic perceptron can also be represented in matrix notation, by arranging properly the terms in (14):

A(k + 1) = A(k) + η e(k) X X^T
B(k + 1) = B(k) + η e(k) X
c(k + 1) = c(k) + η e(k) (15)

As in linear perceptron, the training algorithm in (15) converges if there is an m-dimensional quadratic hypersurface that successfully separates the input data in their respective categories.
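The matrix-form rule in (15) can be sketched as follows (our own illustration in Python; the bipolar XOR data and learning rate are choices for the example, echoing the observation in [11] that one quadratic perceptron solves XOR):

```python
import numpy as np

def hardlims(u):
    # hardlims activation (4)
    return 1 if u > 0 else -1

def train_quadratic_perceptron(Xs, ds, eta=0.1, epochs=100):
    """Matrix training rule (15): A, B and c are updated like
    independent perceptron weights, with X X^T as the quadratic input."""
    m = Xs.shape[1]
    A = np.zeros((m, m))
    B = np.zeros(m)
    c = 0.0
    for _ in range(epochs):
        for X, d in zip(Xs, ds):
            y = hardlims(X @ A @ X + B @ X + c)   # output (11)
            e = d - y                             # error (5)
            A += eta * e * np.outer(X, X)         # A(k+1) = A(k) + eta e X X^T
            B += eta * e * X                      # B(k+1) = B(k) + eta e X
            c += eta * e                          # c(k+1) = c(k) + eta e
    return A, B, c

# XOR in bipolar form: a single quadratic perceptron can solve it [11]
Xs = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
ds = np.array([-1, 1, 1, -1])
A, B, c = train_quadratic_perceptron(Xs, ds)
```

Because the products xixj make the classes linearly separable in the expanded feature space, the rule inherits the convergence guarantee of the linear perceptron, as stated above.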

IV. CLASSIFICATION EXAMPLES

This section shows the application of the proposed quadratic perceptron to nonlinear pattern classification problems. The artificial neural networks were designed in MATLAB, with 70% of the data used for training. An MLP with one hidden layer of 10 neurons with Tansig activation function is used for comparison with the proposed quadratic perceptron. Each parameter of the input data was normalized to the range -1 to 1 in order to improve the training algorithm [2].

In the case of the proposed quadratic perceptron, after the training process the synaptic weights and the input data were multiplied by a scale factor so that they could be represented as 16-bit integers. This conversion from decimal to integer was needed because the FPGA EP2C20F484C7 cannot perform fixed-point or floating-point multiplications.
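A possible sketch of this integer conversion (the scale-factor choice and example weights are our assumptions; the paper does not give its exact recipe). Scaling all coefficients by one common positive factor does not change the sign of u for a given real-valued input:

```python
import numpy as np

def to_int16(coeffs, max_int=32767):
    """Scale a coefficient set by one positive factor so the largest
    magnitude maps near the 16-bit limit, then round to integers.
    A common positive factor preserves the sign of u = X^T A X + B^T X + c."""
    flat = np.concatenate([np.ravel(np.asarray(x, dtype=float)) for x in coeffs])
    scale = max_int / np.max(np.abs(flat))
    return [np.round(np.asarray(x) * scale).astype(np.int16) for x in coeffs], scale

# Example real-valued weights (hypothetical values)
A = np.array([[0.031, -0.12], [-0.12, 0.45]])
B = np.array([0.02, -0.3])
c = 0.007
(Ai, Bi, ci), s = to_int16([A, B, c])
```

Note that the paper scales the input data as well; this sketch scales only the coefficients, which already keeps the decision sign for real-valued inputs.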

A. Classification of Iris Flower

The proposed quadratic perceptron is used to classify three types of iris flowers, based on four characteristics: sepal length, sepal width, petal length and petal width. There are 150 samples, 50 of each type. In this case, three quadratic perceptrons were designed (one per category).
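With one perceptron per category, the class decision can be sketched as a one-vs-rest scheme (our illustration; the tie-breaking rule, first perceptron that fires wins, is an assumption, and the toy coefficients below are hypothetical):

```python
import numpy as np

def one_vs_rest_class(perceptrons, X):
    # perceptrons: list of (A, B, c) triples, one per category
    # Returns the index of the first perceptron whose output (11) is +1
    for k, (A, B, c) in enumerate(perceptrons):
        if X @ A @ X + B @ X + c > 0:
            return k
    return None   # no perceptron fired

# Toy 2-class example with linear-only coefficients (A = 0)
Z = np.zeros((2, 2))
perceptrons = [(Z, np.array([1.0, 0.0]), 0.0),    # fires when x1 > 0
               (Z, np.array([-1.0, 0.0]), 0.0)]   # fires when x1 < 0
```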

Classification results over all samples, for the MLP and for the quadratic perceptron represented with integers, are shown in Fig. 1 and Fig. 2, respectively. The success ratio is 92.6% for the MLP and 96% for the proposed quadratic perceptron. Consequently, the proposed classifier is more accurate than the MLP in this example.


Fig. 1. Classification results of the MLP for flower recognition. Success = 1 means that the classification of the data is correct. Otherwise, success = 0.


Fig. 2. Classification results of the proposed quadratic perceptron for flower recognition. Success = 1 means that the classification of the data is correct. Otherwise, success = 0.



B. Classification of Breast Cancer

The second classification problem is the detection of benign or malignant breast cancer, using nine characteristics of a biopsy sample: clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin and normal nucleoli. In this case there are 699 samples. The results of the classification task are presented in Fig. 3 and Fig. 4.


Fig. 3. Classification results of the MLP for breast cancer detection. Success = 1 means that the classification of the data is correct. Otherwise, success = 0.


Fig. 4. Classification results of the proposed quadratic perceptron for breast cancer detection. Success = 1 means that the classification of the data is correct. Otherwise, success = 0.

The success ratio of the MLP and the proposed classification system are 96.99% and 97.84%, respectively.

C. Theoretical Classification Problem

Fig. 5 shows the classification boundary for a theoretical binary classification problem. The input set consists of eight points with the format (x1, x2, y), where x1 and x2 are the characteristics of the data and y is the category (expressed as -1 or 1). The data are: I1 = {0.8, 0.8, 1}; I2 = {0.8, -0.8, 1}; I3 = {-0.8, 0.8, 1}; I4 = {-0.8, -0.8, 1}; I5 = {0.2, 0.9, -1}; I6 = {0.2, -0.9, -1}; I7 = {-0.2, 0.9, -1} and I8 = {-0.2, -0.9, -1}. One quadratic perceptron can classify these data properly. The decision boundary is a hyperbola.
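For instance, a hyperbola of the form x1^2 - 0.5*x2^2 - 0.2 = 0 (hypothetical coefficients of ours, not the trained values of Section V) already separates the eight points, which can be checked directly with (11):

```python
import numpy as np

def hardlims(u):
    # hardlims activation (4)
    return 1 if u > 0 else -1

# Hypothetical hyperbolic boundary: x1^2 - 0.5*x2^2 - 0.2 = 0
A = np.array([[1.0, 0.0], [0.0, -0.5]])
B = np.array([0.0, 0.0])
c = -0.2

# The eight points I1..I8 from the text
Xs = np.array([[0.8, 0.8], [0.8, -0.8], [-0.8, 0.8], [-0.8, -0.8],
               [0.2, 0.9], [0.2, -0.9], [-0.2, 0.9], [-0.2, -0.9]])
ds = [1, 1, 1, 1, -1, -1, -1, -1]
preds = [hardlims(X @ A @ X + B @ X + c) for X in Xs]
print(preds == ds)   # prints True
```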


Fig. 5. Decision hypersurface for the third classification problem.

V. IMPLEMENTATION OF QUADRIC PERCEPTRON IN FPGA

The proposed quadratic perceptron was implemented in the FPGA EP2C20F484C7, according to (11). The matrices Ai, Bi and scalars ci (i = 1, 2, 3) are:

Ai (4×4 matrix of 16-bit integer coefficients), Bi (1×4 integer vector) and ci (integer scalar), for i = 1, 2, 3 (16)-(18)



Fig. 6 shows the resources used in the implementation of the quadratic perceptron in the FPGA: 51% of the logic elements, 7 registers and 52 embedded multipliers. Fig. 7 shows the experimental setup used to implement and test the quadratic perceptron. No special technique was used to implement the arithmetic operations.

Fig. 6. Resources used for the implementation of the quadratic perceptron for iris flower recognition.

A set of nine input data, three of each category, was used for the experimental test. The data were arranged in the sequence class 1, class 2, class 3, class 1, class 2, class 3, class 1, class 2, class 3, to make verification easy. This cycle is repeated continuously. A 10 kHz clock signal feeds new data into the classification system.

Fig. 8 shows, from top to bottom, the clock signal and the classification results of the quadratic perceptrons for the first, second and third categories, respectively. The classification results follow the same cycle: class 1, class 2, class 3, and so on. This confirms that the quadratic perceptron implemented in the FPGA classifies the data properly.

Fig. 9 indicates that the FPGA needs 40 ns to generate a new classification response. This delay is produced mainly by the multiplications required to evaluate the quadratic function.

Fig. 7. Experimental setup.

Fig. 8. Experimental classification results for the quadric perceptrons.

Fig. 9. Delay in the response of the perceptrons.

The quadratic perceptron for the theoretical classification problem was also implemented in the FPGA. The coefficients of the quadratic hypersurface (implemented as 16-bit integers) are:

A (2×2 matrix of 16-bit integer coefficients), B (1×2 integer vector) and c (integer scalar) (19)

Fig. 10 shows the resources used in this implementation: 2% of the logic elements, 32 registers and 8 embedded multipliers. The number of multiplications depends on the number of variables in each input.

Fig. 11 shows the experimental results, again with a 10 kHz clock. The input data were inserted in sequence from I1 to I8, according to the synchronization signal (upper trace). The data are successfully classified. The delay was calculated as 1 ns.



Fig. 10. Resources used for the implementation of the quadratic perceptron for theoretical classification problem.

Fig. 11. Experimental result for theoretical classification problem

VI. CONCLUSIONS

The proposed quadratic perceptron with hardlims activation function can be easily implemented in a FPGA because it only needs additions and multiplications. The classification examples demonstrate that the proposed perceptron is a powerful alternative for the design of classification systems. More complex classification problems can be solved using the proposed classification system.

ACKNOWLEDGMENT

The authors thank the BATLAB Laboratory of the Federal University of Mato Grosso do Sul and CNPq – Brazil for their support of this research.

REFERENCES

[1] T. Orlowska-Kowalska and M. Kaminski, “FPGA implementation of the multilayer neural network for the speed estimation of the two-mass drive system,” IEEE Trans. Ind. Informat., vol. 7, no. 3, pp. 436-445, Aug. 2011.

[2] S. Haykin, Neural Networks – A Comprehensive Foundation, Prentice Hall, 1999.

[3] N. Chalhoub, F. Muller and M. Auguin, “FPGA-based generic neural network architecture,” International Symposium on Industrial Embedded Systems – IES’06, pp. 1-4, Oct. 2006.

[4] R. N. A. Prado, J. D. Melo, J. A. N. Oliveira and A. D. D. Neto, “FPGA based implementation of a Fuzzy neural network modular architecture for embedded systems,” The 2012 International Joint Conference on Neural Networks – IJCNN, pp. 1-7, June 2012.

[5] Z. Yanling, D. Bimin and W. Zhanrong, “Analysis and study of perceptron to solve XOR problem,” 2nd International Workshop on Autonomous Decentralized System, pp. 168-173, Nov. 2002.

[6] B.-L. Lu, Y. Bai, H. Kita and Y. Nishikawa, “An efficient multilayer perceptron for pattern classification and function approximation,” Proc. 1993 International Joint Conference on Neural Networks – IJCNN 1993, vol. 2, pp. 1385-1388, Oct. 1993.

[7] I. B. Nieto and J. R. Varejo, “A decision boundary hyperplane for the vector space of conics using a polynomial kernel in m-euclidean space,” IEEE International Joint Conference on Neural Networks – IJCNN 2008, pp. 1273-1278, June 2008.

[8] T.-H. Su, C.-L. Liu and X.-Y. Zhang, “Perceptron learning of modified quadratic discriminant function,” International Conference on Document Analysis and Recognition – ICDAR, pp. 1007-1011, Sept. 2011.

[9] Y.-H. Tseng and J.-L. Wu, “Solving sorting and related problems by quadratic perceptrons,” Electronics Letters, vol. 28, no. 10, pp. 906-908, May 1992.

[10] G. M. Georgiou, “Exact interpolation and learning in quadratic neural networks,” International Joint Conference on Neural Networks – IJCNN’06, pp. 230-234, 2006.

[11] K. F. Cheung and C. S. Leung, “Rotational quadratic function neural networks,” IEEE International Joint Conference on Neural Networks, vol. 1, pp 869-874, Nov. 1991.

[12] O. Cardenas, G. Megson and D. Jones, “A new organization for a perceptron-based branch predictor and its FPGA implementation,” Proc. IEEE Annual Symposium on Computer Society, pp. 305-306, May 2005.

[13] W. Qinruo, Y. Bo, X. Yun and L. Bingru, “The hardware structure design of perceptron with FPGA implementation,” IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 762-767, Oct. 2003.

[14] Y. Jewajinda and P. Chongstitvatana, “FPGA-based online-learning using parallel genetic algorithm and neural network for ECG signal classification,” International Conference on Electrical Engineering/Electronics Computer Telecommunications and Information Technology – ECTI-CON, pp. 1050-1054, May 2010.

[15] M. Papadonikolakis and C.-S. Bouganis, “A novel FPGA-based SVM Classifier,” International Conference on Field-Programmable Technology – FPT, pp. 283-286, Dec. 2010.

[16] M. A. Tahir and A. Bouridane, “An FPGA based coprocessor for cancer classification using nearest neighbour classifier,” IEEE International Conference on Acoustics, Speech and Signal Processing – ICASSP, pp. 1012-1015, May 2006.

[17] B. Y. Chen, F. Dillen and H. Z. Song, “Quadric hypersurfaces of finite type,” Colloquium Mathematicum, vol. 63, no. 2, pp. 145-152, 1992.
