Review: Competitive Learning Algorithm of Neural Network

1 Jithendra Singh Sengar, 2 Niresh Sharma
1 M.Tech (CSE), RKDF IST Bhopal, [email protected]
2 HOD, Department of CSE, RKDF IST Bhopal

Abstract

This paper deals mainly with the development of new learning algorithms and the study of the dynamics of neural networks. This survey covers unsupervised competitive learning as well as some of its variants, such as hard competitive learning and soft competitive learning. After introducing unsupervised learning algorithms, we discuss the other competitive learning methods, their motivations, and the strengths as well as the weaknesses of this model. The paper focuses on the evolution of learning algorithms for neural networks based on competitive learning; furthermore, we compare its performance with that of other competitive learning methods that have appeared in the past. At the end of the paper, we conclude the survey of competitive methods.

Keywords: Neural Network, Unsupervised Learning Algorithms, Competitive Methods of Learning.

I. INTRODUCTION

Learning is a process by which the free parameters of a neural network [3] are adapted through a continuing process of stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which the parameter changes take place. In unsupervised learning [7] there is no external teacher to oversee the learning process. In other words, there are no specific samples of the function to be learned by the network. Rather, provision is made for a task-independent measure of the quality of the representation that the network is required to learn, and the free parameters of the network are optimized with respect to that measure. Once the network has become tuned to the statistical regularities of the input data, it develops the ability to form internal representations for encoding features of the input and thereby creates new classes automatically.

In unsupervised learning, we are given a training set {x_i; i = 1, 2, ..., m} of unlabeled vectors in R^n. The objective is to categorize or discover features or regularities in the training data. In some cases the x_i must be mapped into a lower-dimensional set of patterns such that any topological relations existing among the x_i are preserved among the new set of patterns. Normally, the success of unsupervised learning hinges on an appropriately designed network which embodies a task-independent criterion of the quality of the representation that the network is required to learn; the weights of the network are then optimized with respect to this criterion. Our interest here is in training networks of simple units to perform the above tasks. In the remainder of this work, some basic unsupervised learning rules for a single unit and for simple networks are introduced.

II. COMPETITIVE LEARNING

In this form of learning, those neurons which respond strongly to input stimuli have their weights updated. When an input pattern is presented, all neurons in the layer compete, and the winning neurons undergo weight adjustment. The main goal of competitive learning [6],[8] is to transform an input signal pattern of arbitrary dimension into a lower-dimensional (one-, two-, or three-dimensional) representation, and to perform this transformation adaptively in a topologically ordered fashion. The algorithm starts by randomly initializing the synaptic weights in the network; that is, no prior order is imposed on the network by the initialization operation.
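To make the procedure concrete, the following is a minimal NumPy sketch of hard (winner-take-all) competitive learning as described above: random initialization, competition by Euclidean distance, and adjustment of the winner only. The update rule W[winner] += eta * (x - W[winner]), the constant learning rate eta, and all function and parameter names are illustrative assumptions, not details taken from this paper.

import numpy as np

def competitive_learning(X, n_units, eta=0.1, epochs=50, seed=0):
    """Hard (winner-take-all) competitive learning.

    X       : (m, n) array of unlabeled training vectors
    n_units : number of competing neurons
    eta     : learning rate (assumed constant here for simplicity)
    """
    rng = np.random.default_rng(seed)
    # Random initialization: no prior order is imposed on the weights.
    W = rng.normal(size=(n_units, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            # Competition: the neuron whose weight vector is closest to
            # the input (smallest Euclidean distance) wins.
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Adjustment: move only the winner's weights toward the input.
            W[winner] += eta * (x - W[winner])
    return W

# Example usage: two well-separated 2-D clusters, four competing units.
X = np.vstack([np.random.randn(100, 2) + 3, np.random.randn(100, 2) - 3])
W = competitive_learning(X, n_units=4)

Under this rule, each weight vector drifts toward the centroid of the inputs for which its neuron wins, so the units come to partition the input space.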
After that, the method involves the following major activities: 1. Competition, 2. Adjustment.

2.1. Competition

For each input pattern, the neurons in the network compute their respective values of a discriminant function based on the input. This function provides the basis for competition among the neurons. The neuron with the smallest value of the discriminant function (usually a distance in Euclidean space: the Euclidean distance) is declared the winning neuron. To understand the distance measure used in this discriminant function, we briefly describe it below. Assume that m is the dimension of the input pattern (or input space) and v denotes an input pattern vector, i.e. v = [v1, v2, ..., vm]^T. The synaptic weight vector has the same dimension as the input space. Let the synaptic weight vector of neuron i be denoted by wi = [wi1, wi2, ..., wim]^T. To find the best match of the input vector v with the synaptic weight vectors, compare the distance of the input vector with all the synaptic weight vectors and select the neuron whose weight vector is closest to v.
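In symbols, using the notation just defined (the arg-min formulation below is the standard one and is our paraphrase, not a formula quoted from the paper), the index i(v) of the winning neuron is

i(v) = \arg\min_{j} \lVert v - w_j \rVert, \qquad
\lVert v - w_j \rVert = \sqrt{\sum_{k=1}^{m} (v_k - w_{jk})^2}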


IV. CONCLUSION AND FUTURE WORK

Finally, simple, single-layer networks of multiple interconnected units are considered in the context of competitive learning, learning vector quantization, principal component analysis, and self-organizing feature maps. Simulations are also included which are designed to illustrate the powerful emergent computational properties of these simple networks and their applications. It is demonstrated that local interactions in a competitive net can lead to global order. A case in point is the SOFM, where simple incremental interactions among locally neighboring units lead to a global map that preserves the topology and density of the input data. This paper also considers the use of learning vector quantization to model aspects of development in the visual system in early life, including recurrent learning rules for strengthening synaptic efficacy.
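To illustrate how local, neighborly interactions can produce a globally ordered map, here is a minimal sketch of a one-dimensional SOFM update, in the same style as the earlier sketch. The Gaussian neighborhood function and all parameter values are common textbook choices assumed here, not details taken from this paper.

import numpy as np

def sofm_1d(X, n_units, eta=0.2, sigma=2.0, epochs=50, seed=0):
    """Self-organizing feature map on a 1-D chain of units.

    Unlike pure winner-take-all learning, the winner's neighbors on the
    chain are also updated; this local cooperation is what yields a
    topology-preserving global map.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    positions = np.arange(n_units)  # unit coordinates along the chain
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Gaussian neighborhood: units near the winner on the chain
            # receive proportionally larger updates.
            h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
            W += eta * h[:, None] * (x - W)
    return W

After training, units that are adjacent on the chain end up with similar weight vectors, so neighboring inputs map to neighboring units.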

References

[1] Lo, J. T.-H., "Unsupervised Hebbian learning by recurrent multilayer neural networks for temporal hierarchical pattern recognition," Information Sciences and Systems (CISS), 2010 44th Annual Conference on, 2010, pp. 1-6. DOI: 10.1109/CISS.2010.5464925.
[2] Tung-Shou Chen; Jeanne Chen; Yuan-Hung Kao; Bai-Jiun Tu, "A Novel Anti-Competitive Learning Neural Network Technique against Mining Knowledge from Databases," Software Engineering, 2009. WCSE '09. WRI World Congress on, Vol. 4, 2009, pp. 383-386. DOI: 10.1109/WCSE.2009.345.
[3] Shou-wei Li, "Analysis of Contrasting Neural Network with Small-World Network," Future Information Technology and Management Engineering, 2008. FITME '08. International Seminar on, 2008, pp. 57-60. DOI: 10.1109/FITME.2008.55.
[4] Esakkirajan, S.; Veerakumar, T.; Navaneethan, P., "Adaptive vector quantization technique for retinal image compression," Computing, Communication and Networking, 2008. ICCCn 2008. International Conference on, 2008, pp. 1-4. DOI: 10.1109/ICCCNET.2008.4787724.
[5] Chaudhuri, A.; De, K.; Chatterjee, D., "A Study of the Traveling Salesman Problem Using Fuzzy Self-Organizing Map," Industrial and Information Systems, 2008. ICIIS 2008. IEEE Region 10 and the Third International Conference on, 2008, pp. 1-5. DOI: 10.1109/ICIINFS.2008.
[6] Kamimura, R., "Controlled Competitive Learning: Extending Competitive Learning to Supervised Learning," Neural Networks, 2007. IJCNN 2007. International Joint Conference on, 2007, pp. 1767-1773. DOI: 10.1109/IJCNN.2007.4371225.
[7] Daxin Tian; Yanheng Liu; Da Wei, Intelligent Control and Automation, 2006. WCICA 2006. The Sixth World Congress on, Vol. 1, 2006, pp. 2886-2890. DOI: 10.1109/WCICA.2006.1712893.
[8] Sutton, G. G., III; Reggia, J. A.; Maisog, J. M., "Competitive learning using competitive activation rules," Neural Networks, 1990. IJCNN International Joint Conference on, 1990, pp. 285-291. DOI: 10.1109/IJCNN.1990.137728.
