ARTICLE IN PRESS. G Model SOC 2298, 1–16
Applied Soft Computing xxx (2014) xxx–xxx
Contents lists available at ScienceDirect
Applied Soft Computing
journal homepage: www.elsevier.com/locate/asoc
Comparison of visualization of optimal clustering using self-organizing map and growing hierarchical self-organizing map in cellular manufacturing system
Manojit Chattopadhyay a,∗, Pranab K. Dan b, Sitanath Mazumdar c
a Information Technology Area, Indian Institute of Management Raipur, GEC Campus, Sejbahar, Raipur 492015, India
b Department of Industrial Engineering & Management, School of Engineering, West Bengal University of Technology, BF 142, Sector 1, Salt Lake City, Kolkata 700064, West Bengal, India
c Faculty Counsel for PG Studies in Commerce, Social Welfare & Business Management, Calcutta University, Alipore Campus, Reformatory Street, Kolkata 700027, India
a r t i c l e   i n f o
Article history:
Received 26 June 2012
Received in revised form 20 February 2014
Accepted 21 April 2014
Available online xxx
Keywords:
Cellular manufacturing system
Operation sequence
Visual clustering
Self-organizing map
Growing hierarchical self-organizing map
Optimization
Performance measure
Group technology efficiency
a b s t r a c t
The present research deals with the cell formation problem (CFP) of the cellular manufacturing system, which is an NP-hard problem; thus, the development of optimal machine-part cell formation algorithms has always been the primary attraction in the design of cellular manufacturing systems. In the proposed work, the self-organizing map (SOM) approach has been used, which is able to project data from a high-dimensional space to a low-dimensional space, so it is considered a visual approach for explaining a complicated CFP data set. However, for a large data set with a high dimensionality, a traditional flat SOM seems insufficient for further explaining the concepts inside the clusters. We propose one possible solution for a large CFP data set by using the SOM in a hierarchical manner, known as the growing hierarchical self-organizing map (GHSOM). In the present work, the two novel contributions using GHSOM are the choice of optimum architecture through the minimum pattern units extracted at layer 1 for the respective threshold values and selection. Furthermore, the experimental results clearly indicate that machine-part visual clustering using GHSOM can be successfully applied to identifying a cohesive set of part families processed by a machine group. Computational experience with the proposed GHSOM algorithm, on a set of 15 CFP problems from the literature, has shown that it performs remarkably well. The GHSOM algorithm obtained solutions that are at least as good as the ones found in the literature. For 75% of the cell formation problems, the GHSOM algorithm improved the goodness of cell formation through the GTE performance measure over SOM as well as over the best result from the literature, in some cases by more than 12.81% (GTE). Thus, comparing the results of the experiments in this paper for the SOM and GHSOM using the paired t-test, it has been revealed that the GHSOM approach performed better than the SOM approach as far as the group technology efficiency (GTE) measure of the goodness of cell formation is concerned.
1. Introduction
The cellular manufacturing system (CMS) is a relatively self-contained and self-regulated manufacturing approach that
involves the grouping of machines and parts such that each part family is processed by a machine cell. The benefits of CMS are to reduce lead time, setup time, material handling and work in
∗ Corresponding author. Tel.: +91 03324535605.
E-mail addresses: [email protected] (M. Chattopadhyay), [email protected] (P.K. Dan), sitanath [email protected] (S. Mazumdar).
http://dx.doi.org/10.1016/j.asoc.2014.04.027
1568-4946/© 2014 Elsevier B.V. All rights reserved.
process, and also to improve quality, machine utilization, simplified scheduling, space utilization, and human relations. The benefits of productivity improvement and lead time reduction in MRO operations have also been reported from applications of CMS [53].
The strategy of CMS is to process a family of parts through a dedicated cell, thereby gaining the advantages of a mass production system. The principal problem in designing a CM system is the cell formation problem (CFP), which is solved by decomposing the
system into subsystems which are as autonomous as possible, so that the relations of the machines and parts inside subsystems are maximized and the interactions among machines and parts of other subsystems are reduced to the extent possible.
The cell formation problem is an NP-hard problem [1,2]; therefore, the development of optimal machine-part cell formation algorithms has always been the primary attraction in the design of cellular manufacturing systems, and wide-ranging research exists in the literature. During the last few decades a large number of authors have reviewed the work on CMS extensively [3–5,57,64].
Over the last decade, researchers have presented many cell formation methods to solve the CF problem. These methods can be broadly classified as mathematical programming methods [6–9,61], heuristic [10–12] and metaheuristic algorithms [13–20,54–56], and artificial intelligence methodologies [21–27].
Unsupervised learning techniques are a subset of the neural network field which enable the identification and grouping of machine-part clustering patterns without having seen the pattern before or having its key characteristics described; to do this, a similarity measure is defined and the groups are clustered together into a lower-dimensional space [28]. The self-organizing map (SOM), an unsupervised neural network, is one such technique, enabling the mapping of data with a large feature set into a 2D space [29,59]. SOM has been effectively used to solve the CFP of CMS [28,30–34] by applying the traditional SOM through visual understanding of the part-machine incidence matrix (PMI) based data structure related to various production factors. In the earlier work of [34], the quality of the trained SOM was evaluated by the quantization error and topographic error calculated for various map sizes, which have a strong influence on the quality of machine-part cell formation. It is also evident from the literature that there is no definite rule for determining the optimal SOM map size, which is one of the serious lacunas of the traditional SOM [34,60]. In the context of the present work, the following limitations of the traditional SOM are taken into account while applying it to cell formation. The limitations are summarized as follows:
• Firstly, the size and arrangement of the maps have to be established prior to training.
• Secondly, they are not able to represent hierarchical relationships of the CFP data.
• Thirdly, it is quite difficult to comprehend the clustering of cell formation from a large output SOM map applied to huge PMI matrices.
Therefore, the growing hierarchical self-organizing map (GHSOM) [35] was proposed to address the above weaknesses. This type of SOM approach has a hierarchical architecture divided into layers consisting of different SOMs whose size is automatically determined during the unsupervised learning process.
The GHSOM approach was implemented successfully in creating a topology-preserving representation of the topical clusters of the machine-part cell formation by the present authors for the first time [36]. The proposed work differs from the earlier one by experimenting with and analyzing the convergence criteria, computation time, and quantization error for exploring a suitable GHSOM model applied to cell formation problems, so that the best optimal visual clustering of machine-part cell formation is achieved.
The objectives of the proposed work are:
(1) to determine the impact of two parameters (depth and breadth) to obtain the best optimal GHSOM map orientation and topology preservation.
(2) to understand the hierarchical clustering improved by applying different visualization techniques to the maps at each layer, so that the visual clustering information can be utilized by the manager in order to obtain optimum machine-part cells.
(3) to compare the network architecture and performance (bothcomputational time and quantization error) of both SOM andGHSOM for visual clustering of machine-part cell formation.
(4) to analyze the criteria of choice of visual clustering of machine-part cell formation using traditional SOM and GHSOM that usethe information on the sequence of operations of the differentpart types.
(5) to verify the efficacy of our work, we first conduct an experiment using the proposed SOM and GHSOM models; we then apply both models to selected CFPs from the literature to calculate the goodness of cell formation and compare them with the best available results from other approaches in the literature.
1.1. Cell formation problem
The present work considers a real-life production factor, operation sequence, in the problem formulation; an operation sequence is an ordering of the machines on which a part is sequentially processed [37]. In the machine-part incidence matrix (MPIM), the operation sequence is represented as the value a_ij. Inside the matrix, a_ij gives the position of machine i in the operation sequence of part j; the entry is zero otherwise.
Grouping of parts into families and machines into cells results in a transformed matrix with diagonal blocks, where sequences of operations (ordinal numbers 1, 2, 3, …) occupy the diagonal blocks and zeros the off-diagonal blocks. The output diagonal blocks represent the machine-part cells. A sequence number lying outside the diagonal blocks indicates processing of a part outside its cell, requiring inter-cell movement, which is undesirable. Preferably, all the operations of the parts should be completed within the cell to which they belong. The objective of the cell formation problem is thus to rearrange the PMI to minimize the number of exceptional elements, provided a block-diagonal structure exists. Thus, in general, the best result for a machine-part clustering is desired to satisfy the following two conditions:
(a) To minimize the number of 0s inside the diagonal blocks (i.e.,voids);
(b) To minimize the number of ordinal numbers (sequence) outsidethe diagonal blocks (i.e., exceptional elements).
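The two conditions above can be checked mechanically. A minimal sketch (the function and variable names such as `mpim`, `machine_cell`, and `part_cell` are illustrative, not from the paper) that counts voids and exceptional elements for a given cell assignment:

```python
# Hypothetical sketch: counting voids and exceptional elements of a
# machine-part incidence matrix (MPIM) under a given cell assignment.

def voids_and_exceptions(mpim, machine_cell, part_cell):
    """mpim[i][j] holds the operation-sequence number of part j on
    machine i (0 if part j never visits machine i)."""
    voids = exceptions = 0
    for i, row in enumerate(mpim):
        for j, a_ij in enumerate(row):
            same_cell = machine_cell[i] == part_cell[j]
            if a_ij == 0 and same_cell:
                voids += 1          # a zero inside a diagonal block
            elif a_ij != 0 and not same_cell:
                exceptions += 1     # a sequence number outside its block
    return voids, exceptions

# Toy 2-machine x 3-part example: machine 0 and parts 0, 1 in cell 0;
# machine 1 and part 2 in cell 1.
mpim = [[1, 2, 0],
        [0, 1, 1]]
print(voids_and_exceptions(mpim, [0, 1], [0, 0, 1]))
```

A good block-diagonal solution drives both counts toward zero simultaneously.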
The remainder of this paper is organized as follows. In Section 2, we describe the visual clustering approach of both the proposed self-organizing map and growing hierarchical self-organizing map algorithms. The goodness of cell formation through a performance measure known as group technology efficiency is also discussed in Section 2. Section 3 presents an experiment with a numerical example using both the SOM and GHSOM visual clustering models. In Section 4, we examine the computational results and findings based on the structure of the visual clustering maps of both applied models and show how they confirm the efficacy of this research. A detailed discussion is given in Section 5 and, finally, we conclude this paper and describe our future work in Section 6.
2. Methodology
Visual cluster analysis, as the term implies, combines information visualization and cluster analysis techniques and is gaining significant relevance in many industrial and research areas for visualizing complex and high-dimensional data. It has
been shown that visualization allows for verification of the clustering results [39]. SOM maps are often proposed for this task since they generate a mapping from the high-dimensional input space to the low-dimensional structure used as the network topology.
Fig. 1. Insertion of a row between error unit e and most dissimilar unit d.
The issue is not about the dimensionality reduction ability of SOMs; rather, it is about the better preservation of topology of SOMs. For a long time this issue was handled mostly by visual inspection. The SOM approach is able to project data from a high-dimensional space to a low-dimensional space, so it is considered a visual approach for explaining a complicated data set [40] and for exploring linear and non-linear patterns in large and complex datasets [58]. However, for a large data set with a high dimensionality, a flat SOM seems insufficient for further explaining the concepts inside the clusters. One possible solution for an extensive data set is using the SOM in a hierarchical manner.
In contrast to earlier work [36], the topology of the given data is taken into account for this measure, which we regard as an important contribution in this regard. In the present work we establish the optimum map structure by experimenting with the depth and breadth parameters of the GHSOM. In the following we investigate the representation of a rather simple data set realized by (a) the self-organizing feature map and (b) the proposed GHSOM method. The data comes from a 20 × 25 rectangular area. For the SOM we chose an array of size 20 × 20.
A brief introduction of the SOM and GHSOM methods is given in the following section.
2.1. Self-organizing map (SOM) algorithm
The SOM is a nonlinear, ordered, smooth mapping of high-dimensional input data onto the elements of a regular, low-dimensional (usually 2D) array [29,41]. The architecture of the SOM consists of a set of i units arranged in a 2D grid with a weight vector m_i attached to each unit, which may be initialized randomly. Input vectors x are presented to the SOM, and the activation of each unit for the presented input vector is calculated using an activation function. Commonly, it is the Euclidean distance between the weight vector of the unit and the input vector that serves as the activation function. In the next step, the unit showing the highest activation (i.e., the smallest Euclidean distance) is selected as the "winner" c_k, where

c_k = arg min_i ||x_k − m_i||.  (1)
The weight vector of the winner is moved toward the presented input signal by a certain fraction of the Euclidean distance, as indicated by a time-decreasing learning rate α. The learning rate can be an inverse-time, linear, or power function. Thus, this unit's activation will be even higher the next time the same input signal is presented. Moreover, the weight vectors of units in the neighborhood of the winner are also modified according to a spatial–temporal neighborhood function ε. Similar to the learning rate, the neighborhood function ε is time-decreasing. Also, ε decreases spatially away from the winner. There are many types of neighborhood functions; the typical one is Gaussian. The learning rule may be expressed as

m_i(t + 1) = m_i(t) + α(t) · ε(t) · [x(t) − m_i(t)],  (2)
where t denotes the current learning iteration and x represents the currently presented input pattern. This learning procedure leads to a topologically ordered mapping of the presented input data. Similar patterns are mapped onto neighboring regions on the map, while dissimilar patterns are further apart. One limitation of the
SOM is that its size needs to be specified before the training process [62]. The smaller the size, the more general the information obtained; the larger the size, the more detailed the information that can be extracted. Additional discussion on the SOM is given in [34].
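As a concrete illustration, a minimal sketch of the training loop in Eqs. (1) and (2) might look as follows; the grid size, decay schedules, and random data are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Minimal SOM training sketch for Eqs. (1)-(2), assuming a Gaussian
# neighborhood and exponentially decaying learning rate and width.
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 3
weights = rng.random((grid_h * grid_w, dim))             # m_i, random init
coords = np.array([(r, c) for r in range(grid_h) for c in range(grid_w)])
X = rng.random((200, dim))                               # input vectors x

n_iter = 1000
for t in range(n_iter):
    x = X[t % len(X)]
    # Eq. (1): winner c_k = arg min_i ||x - m_i||
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    alpha = 0.5 * np.exp(-t / n_iter)                    # learning rate α(t)
    sigma = 2.0 * np.exp(-t / n_iter)                    # neighborhood width
    # Gaussian neighborhood ε, decreasing with grid distance from the winner
    d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
    eps = np.exp(-d2 / (2 * sigma ** 2))
    # Eq. (2): m_i(t+1) = m_i(t) + α(t)·ε(t)·[x(t) - m_i(t)]
    weights += alpha * eps[:, None] * (x - weights)

# Mean quantization error: average distance of each sample to its BMU
qe = np.mean(np.min(np.linalg.norm(X[:, None, :] - weights[None, :, :],
                                   axis=2), axis=1))
print(f"mean quantization error after training: {qe:.3f}")
```

The quantization error computed at the end is the same quantity the paper uses to compare candidate map sizes.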
2.2. Growing hierarchical self-organizing map (GHSOM) algorithm
To overcome the SOM limitations of both a static architecture and the lack of hierarchically adaptive architectures, the growing hierarchical self-organizing map (GHSOM), which dynamically fits its multi-layered architecture [42] according to the structure of the CFP data, is proposed in this work. The GHSOM has been used as a tool to perform clustering tasks with applications in many domains of research [43–47]. The GHSOM enhances the capabilities of the basic SOM in two ways. The first is the use of an incrementally growing version of the SOM, which does not require the user to specify the size of the map beforehand, whereas the second enhancement is the ability to adapt to hierarchical structures in the data [48–51,63]. It aims at adapting the net to the data and not vice versa (the predetermined grid size of the traditional SOM is considered a limitation). It consists of several layers of rectangular 2D SOMs that can be arranged and visualized as a quad-tree-like structure. The GHSOM can grow in two different ways during its training. In the first way, each layer can grow in terms of its prototype units, so that the original 2 × 2 map size is enlarged by the insertion of either a row or a column of new units between existing ones (Fig. 1). The map stops growing after a certain point. After this, the units are checked, and if the samples mapped to one unit are highly different, so that the prototype does not represent the samples sufficiently well, another layer of 2 × 2 units is added beneath the unit and training is continued as mentioned above. Each of the two growing processes is governed by a parameter. The training and growing methods are described in more detail below. The topmost layer (layer 0) holds only a single node, calculated as the mean of all input samples. Subsequently, the mean quantization error mqe for this prototype vector is computed, which measures the deviation of the samples, formally,
mqe_0 = (1/|X|) ∑_{x_j ∈ X} ||m_0 − x_j||  (3)
where X is the set of all samples, and m_0 is the single model vector of layer 0. The |X| denotes the cardinality of X (the number of samples). In the case of the single-unit layer 0, all the sample vectors are mapped to this unit. The value mqe_0 will be referred to later; it denotes how far the data set is spread in input space. Below layer 0, layer 1 with initially 4 (2 × 2) units is created and trained according to the usual SOM learning rule (as described in Section 2.1). After a previously defined number λ of steps of the training process, the mean quantization errors for all the units are computed,
mqe_i = (1/|C_i|) ∑_{x_j ∈ C_i} ||m_i − x_j||,  (4)
with C_i the subset of the samples for which unit i is the BMU (best-matching unit). Furthermore, the map's mean quantization error MQE_m can be determined, formally written as

MQE_m = (1/|U|) ∑_{i ∈ U} mqe_i,  (5)

where U denotes the set of units of the map.
As long as

MQE_m ≥ τ_1 · mqe_u  (6)

holds, the training and growing of the current map is continued, where mqe_u is the quantization error of the corresponding unit u in the upper hierarchy layer. Here, the first parameter τ_1 comes into play, which delimits the growth of the map's size. While the stopping criterion is not met, the error unit e is determined, namely the unit with the largest mean quantization error, formally
= arg maxi
(mqei), (7)
Then the most dissimilar unit d is computed, that is, the one of the up to 4 neighbors of e with the largest distance in input space. Between these two units a row or column of units is inserted, initialized with interpolated values (i.e., the mean) of the existing neighboring units. Fig. 1 shows this kind of growth process.
After that, the standard SOM training process is continued for another λ steps, and when the rule in formula (6) no longer holds, training of the current layer is finished. Then the hierarchical growing is applied to every unit i that still satisfies the criterion

mqe_i ≥ τ_2 · mqe_0,  (8)

where τ_2 is the second parameter.
All units refer to the layer 0 unit's quantization error, regardless of which layer the current node is located in. Note that this growing process does not always occur, but only if the unit still requires a more detailed representation. Also, it does not occur evenly across one layer; it is, for example, possible that a node is finished with training while its neighboring unit requires one (or even more) layers of fine-tuning. If this is the case, another SOM of initially 2 × 2 nodes is created on the next layer (see Fig. 2) and trained with the subset of the samples for which the upper unit is the BMU.
Thus, the parameters τ_1 and τ_2 define the thresholds for the two growth processes. Both parameters have to be between 0 and 1. Relatively small values of τ_1 lead to a lengthy growth of each layer and big maps, while large values lead to a shorter training of each map
Fig. 2. Hierarchical growth process of the GHSOM.
and thus to smaller map sizes. If parameter τ_2 is relatively small, units tend to be expanded on the next layer more easily, while large values result in flat hierarchies. Trained GHSOMs can be visualized in a quad-tree-like way, with deeper layers nested in their respective preceding units. Most of the visualization techniques described in the previous sections cannot be applied to the GHSOM, or are more difficult to apply, like the U-matrix. However, hit-histogram-based visualization and component planes can be visualized as with traditional SOMs.
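The two growth checks can be sketched as below, assuming the per-unit quantization errors mqe_i have already been computed; the function names and τ values are hypothetical, not the paper's.

```python
# Illustrative sketch of the GHSOM growth criteria of Eqs. (6) and (8).
# Helper names and tau values are assumptions for demonstration only.

def should_grow_map(unit_mqes, mqe_upper, tau1):
    """Horizontal growth (Eq. (6)): keep inserting rows/columns while
    the map's mean quantization error MQE_m >= tau1 * mqe_upper."""
    mqe_m = sum(unit_mqes) / len(unit_mqes)
    return mqe_m >= tau1 * mqe_upper

def units_to_expand(unit_mqes, mqe0, tau2):
    """Vertical growth (Eq. (8)): a unit i gets a new 2x2 child map
    beneath it while mqe_i >= tau2 * mqe_0 (the layer-0 error)."""
    return [i for i, mqe in enumerate(unit_mqes) if mqe >= tau2 * mqe0]

# Toy numbers: four units on one map, parent-unit error 1.0, mqe_0 = 2.0
unit_mqes = [0.9, 0.2, 0.1, 0.15]
print(should_grow_map(unit_mqes, mqe_upper=1.0, tau1=0.5))  # → False
print(units_to_expand(unit_mqes, mqe0=2.0, tau2=0.3))       # → [0]
```

With these toy numbers the map as a whole is already precise enough to stop growing, yet its first unit still exceeds the vertical threshold and would receive a child map, mirroring the uneven expansion described above.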
2.3. Performance measure
The performance of cell formation considering the ordinal-level data (sequence of operations) has been evaluated by the group technology efficiency (GTE) [38], which is the ratio of the difference between the maximum number of inter-cell travels possible and the number of inter-cell travels actually required by the system to the maximum number of inter-cell travels possible. The maximum number of inter-cell travels possible in the system is
I_p = ∑_{J=1}^{noc} ∑_{I=1}^{nop(J)} [n(I, J) − 1],  (9)
where noc is the total number of part families or cells; nop(J) the total number of parts in the Jth cell; I the part index within a cell; and n(I, J) the maximum number of operations required by the Ith part in the Jth cell.
The number of inter-cell travels required by the system is
I_r = ∑_{J=1}^{p} ∑_{w(J)=1}^{n(J)−1} t_{n(J)w(J)}  (10)

where p is the total number of parts in the system (i.e., in the CFP); J the Jth part in the system; n(J) the maximum number of operations required for the Jth part; and

t_{n(J)w(J)} = 0 if the operations w(J) and w(J) + 1 are performed in the same cell, and 1 otherwise.
Hence, the GTE is calculated as
GTE = (I_p − I_r) / I_p
The GTE is a powerful performance measure: the higher the value of the GTE, the better the goodness of the cell formation.
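As a concrete illustration of Eqs. (9) and (10) and the GTE ratio, a small sketch could look like the following; the names are hypothetical, and inter-cell travel is counted as consecutive machine pairs in a routing that cross cell boundaries.

```python
# Hedged sketch of the GTE computation: each routing is a part's ordered
# machine visits, and machine_cell maps machines to their cells.

def gte(routings, machine_cell):
    """I_p: every consecutive operation pair could at worst cross cells
    (Eq. (9): sum of n(J)-1 over parts). I_r: pairs whose two machines
    actually lie in different cells (Eq. (10))."""
    Ip = sum(len(seq) - 1 for seq in routings)            # Eq. (9)
    Ir = sum(1
             for seq in routings
             for a, b in zip(seq, seq[1:])
             if machine_cell[a] != machine_cell[b])       # Eq. (10)
    return (Ip - Ir) / Ip                                 # GTE ratio

# Toy example: machines 0, 1 in cell 0 and machine 2 in cell 1; part 1
# makes two inter-cell travels out of three possible pair transitions.
machine_cell = {0: 0, 1: 0, 2: 1}
routings = [[0, 1], [1, 2, 1]]
print(gte(routings, machine_cell))
```

A perfect block-diagonal solution has I_r = 0 and therefore GTE = 1 (i.e., 100%), matching the 100% GTE reported for the example problems below.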
3. Experiment with an illustrative example
To evaluate the GHSOM, we applied both the traditional SOM and the GHSOM to eighteen MPI datasets (two of the data sets are artificially generated). In both cases, the cluster structure of the machine-part cell formation was not known a priori, and the intention was to explore the machine-part data and gain new insights. Moreover, in both cases any structure found in the CFP data can easily be evaluated by either the block diagonal form or 2D visualization plots.
In general, the first step to verify results obtained from the visual clustering approach of GHSOM is to compare the GTE performance measures with the results obtained by applying SOM and with the best available results from other approaches in the literature, as shown in Table 6. This is necessary to establish the supremacy of the proposed approaches over those already adopted in the existing literature.
Table 1Example data set of size 20 × 25 generated artificially.
M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 M15 M16 M17 M18 M19 M20
P1 1 3 4 2
P2 2 4 1 3
P3 3 4 1 2 5
P4 2 3 5 4 1
P5 2 4 5 1 3
P6 1 2 3 4
P7 3 4 2
P8 4 5 1 3
P9 1 3 5 2
P10 1 3 4 2
P11 1 3 4 2
P12 3 4 2
P13 1 2 3 4
P14 3 4 1 2
P15 4 5 1 3
P16 2 4 5 1 3
P17 1 3 2
P18 2 3 5 4 1
P19 3 4 1 2
P20 3 4 1 2 5
P21 1 3 5 2
P22 3 4 1 2
P23 1 3
P24 1 3 4
P25 2 4
3.1. Numerical example of a SOM of the 20 × 25 MPI with ordinal sequence data
A numerical illustration for our approach comprises a 20 × 25 dimension MPI matrix based on operation sequence information, which has been generated artificially (Table 1).
This CFP becomes input to the traditional SOM algorithm. The
visual clustering output of SOM for this example problem has been illustrated using the U-matrix and component planes. In order to find the best network topology, we applied quantitative criteria by slightly
Fig. 3. U-matrix and component plane of the traditional SOM algorithm applied on the example CFP problem of size 20 × 25.
increasing the number of neurons. For a more detailed discussion of the machine-part clusters found on this map we refer to [28,34]. The optimum map size of 20 × 20 is determined after selecting the minimum values of QE (0) and TE (0.08), respectively. The computation time for obtaining the optimum map size 20 × 20 is 101 s. The U-matrix was used to visualize the SOM grid map on the resulting 2D feature map shown in the top left of Fig. 3. The cluster border has
been interpreted by the highest color extraction, with high values between the connections of the neurons. In Fig. 3 the cluster border is shown as a white line in the U-matrix. There are
Fig. 4. PC projection to visualize machine-part clusters using the traditional SOM algorithm applied to the example CFP problem of size 20 × 25.
broadly 4 machine-part clusters (C1, C2, C3, C4), represented by the light color extracted as minimum values, were found. The highest color extraction in the U-matrix is the frontal border of the clusters. By the combined visual cluster analysis of the 20 machine component planes (Fig. 3) and the PC projection (Fig. 4), the following four machine-part cells are formed:
C1: machine group: M16, M6, M17, M8, M10; part family: P5, P14, P22, P16, P25, P2, P19
C2: machine group: M11, M14, M4, M5, M19; part family: P20, P7, P15, P8, P12, P3
C3: machine group: M1, M15, M3, M9; part family: P24, P17, P9, P21, P11, P23
C4: machine group: M2, M20, M7, M12, M18, M13; part family: P1, P10, P18, P4, P13, P6
Based on the above information, the block diagonal form (Table 2) has been achieved with GTE 100%.
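The U-matrix reading used above (high inter-unit distances mark cluster borders) can be sketched as follows; the `weights` array and the 4-neighborhood choice are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Sketch of a U-matrix for a trained SOM grid: each cell holds the mean
# distance from that unit's weight vector to its 4-connected neighbors,
# so high values mark cluster borders between dissimilar units.

def u_matrix(weights):
    """weights: (h, w, dim) array of unit weight vectors."""
    h, w, _ = weights.shape
    U = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    dists.append(np.linalg.norm(weights[r, c] - weights[rr, cc]))
            U[r, c] = np.mean(dists)
    return U

# Two well-separated blocks of units produce a bright border between them.
weights = np.zeros((4, 4, 2))
weights[:, 2:] = 5.0                   # right half far from left half
U = u_matrix(weights)
print(np.round(U, 2))
```

Tracing the ridge of high U-matrix values (the white line in Fig. 3) is exactly how the cluster borders between the machine-part cells were delineated above.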
While we find the SOM to provide a good topologically ordered representation of the various parts processed by groups of machines, no information about machine-part hierarchies (sub-cell formation) can be identified from the resulting traditional flat SOM map. Apart from this, we find the size of the map to be quite large with respect to the number of part families identified. This is mainly because the size of the map has been determined in advance, before any information about the number of machine-part clusters is available.
3.2. Numerical example of a GHSOM of the 20 × 25 MPI with ordinal sequence data
Based on the unit representing the mean of all data points at layer 0, the GHSOM training algorithm starts with a 2 × 2 SOM at the first layer. The training process for this map continues with
additional units being added until the quantization error (qe) drops below a certain percentage of the overall qe of the unit at layer 0. The first-layer map output is shown in Figs. 5 and 6. The resulting map (Fig. 5) is a 2 × 2 unit map (four bigger squares) representing 2 part families (the first and fourth squares) of the example machine-part matrix. Accordingly, the machine groups are identified based on the topography analysis of the map. The map has grown further, adding a row and a column respectively, resulting in 2 patterns each in the first and fourth bigger squares at the next level 2 (Fig. 5). From the GHSOM map (Figs. 5 and 6) we find unit (1/1) to represent part sub-families (3, 20), (7, 8, 12, 15), (5, 16), (2, 25, 14, 19, 22), where the corresponding two machine groups consist of the machines
Fig. 5. The resulting 3-layer GHSOM for the example data set of size 20 × 25 to visualize the cluster topology of part family formation.
Table 2Block diagonal form of example problem 20 × 25 after applying SOM cluster visualization.
M16 M6 M17 M8 M10 M11 M14 M4 M5 M19 M1 M15 M3 M9 M2 M20 M7 M12 M18 M13
P5 1 2 3 4 5
P14 1 2 3 4
P22 1 2 3 4
P16 1 2 3 4 5
P25 1 2 3 4
P2 1 2 3 4
P19 1 2 3 4
P20 1 2 3 4 5
P7 2 3 4
P15 1 3 4 5
P8 1 3 4 5
P12 2 3 4
P3 1 2 3 4 5
P24 1 2 3 4
P17 1 2 3
P9 1 2 3 5
P21 1 2 3 5
P11 1 2 3 4
P23 1 2 3
P1 1 2 3 4
P10 1 2 3 4
P18 1 2 3 4 5
P4 1 2 3 4 5
P13 1 2 3 4
P6 1 2 3 4
(4, 5, 11, 14, 19) and (6, 8, 10, 16, 17), whereas in layer 1 of the 4th square the part subfamilies are (17, 23), (4, 18), (9, 11, 21, 24), and (1, 6, 10, 13) (lower left corner), related to the machine groups consisting of machines (1, 3, 15, 9) and (2, 20, 7, 12, 18, 13). Based on this first separation of the most dominant part families in the machine-part cluster, further maps were automatically trained to represent the various part families in more detail. This results in 2 individual maps on the 3rd layer, each representing the data of the respective higher-layer unit in more detail as part sub-families (2, 25), (14, 19, 22) and so on. Some of the units on these second-layer maps were further expanded as distinct SOMs in the third layer.
The resulting 2nd-layer maps are also depicted in Fig. 5. Please note that the maps on the 2nd layer have grown to different sizes according to the structures of the cell formation problem data based on sequence information. Taking a more detailed look at the 4th map of the 2nd layer, representing the 1st unit on the 1st layer, we find it to give a clearer representation of the part family formation.
By the combined visual cluster analysis using the GHSOM maps (Figs. 5 and 6), the following 2 machine-part cells are formed:
Fig. 6. The resulting 3-layer GHSOM for the example data set of size 20 × 25 to visualize the cluster topology of machine group formation.
C1: machine group: M4, M5, M6, M8, M10, M11, M14, M16, M17, M19; part family: P3, P20, P7, P8, P12, P15, P5, P16, P2, P25, P14, P19, P22
C2: machine group: M1, M2, M3, M7, M9, M12, M13, M15, M18, M20; part family: P17, P23, P4, P18, P9, P11, P21, P24, P10, P6, P1, P13
Based on the above observed information, the block diagonal form (Table 3) has been achieved with 100% GTE.
4. Computational results and findings
The computational results of the GHSOM were compared with the ones obtained by using the traditional SOM with respect to three different perspectives: visualization of the resulting maps, structure of the resulting maps, and training time.
The experimental results have shown that both SOM and GHSOM were successful in creating a topology-preserving representation of the topical clusters of the machine-part cell formation. However, when dealing with a bigger machine-part matrix, GHSOM behaved better than SOM in the sense that its architecture was determined automatically during its learning process based on the requirements of the input CFP data. In addition, GHSOM was able to reveal the inherent hierarchical structure of the data in layers and provided the capability to select the granularity of the representation at different levels of the GHSOM.
4.1. Structure of the resulting maps
In this section, the structures of the resulting maps of SOM and GHSOM are investigated.
4.1.1. Structure of SOMs
Typically, the structure of SOMs is evaluated using two quality measures: quantization error (qe) and topographic error (te), as
defined below [52].
(1) Quantization error (qe) is the average distance between each input vector and its winning neuron.
Table 3
Block diagonal form of example problem 20 × 25 after applying GHSOM cluster visualization.
M4 M5 M6 M8 M10 M11 M14 M16 M17 M19 M1 M2 M3 M7 M9 M12 M13 M15 M18 M20
P3 3 4 1 2 5
P20 3 4 1 2 5
P7 3 4 2
P8 4 5 1 3
P12 3 4 2
P15 4 5 1 3
P5 2 4 5 1 3
P16 2 4 5 1 3
P2 2 4 1 3
P25 2 4 1 3
P14 3 4 1 2
P19 3 4 1 2
P22 3 4 1 2
P17 1 3 2
P23 1 3 2
P4 2 3 5 4 1
P18 2 3 5 4 1
P9 1 3 5 2
P11 1 3 4 2
P21 1 3 5 2
P24 1 3 4 2
P10 1 3 4 2
P6 1 2 3 4
P1 1 3 4 2
P13 1 2 3 4
(2) Topographic error (te) is the percentage of input vectors for which the first and second winning neurons are not adjacent units.
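These two measures can be sketched in a few lines of code (an illustrative Python sketch, not the toolbox implementation; `weights` holds one prototype vector per map unit and `grid` holds each unit's row/column position, both hypothetical names):

```python
import math

def euclidean(a, b):
    # Euclidean distance between an input vector and a unit prototype
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def two_best_units(x, weights):
    # indices of the best and second-best matching units for input x
    order = sorted(range(len(weights)), key=lambda i: euclidean(x, weights[i]))
    return order[0], order[1]

def quantization_error(data, weights):
    # qe: average distance between each input vector and its winning neuron
    return sum(euclidean(x, weights[two_best_units(x, weights)[0]])
               for x in data) / len(data)

def topographic_error(data, weights, grid):
    # te: fraction of inputs whose first and second winners are not
    # adjacent on the map grid (8-neighbourhood used here for simplicity)
    def adjacent(i, j):
        (r1, c1), (r2, c2) = grid[i], grid[j]
        return i != j and abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1
    misses = sum(1 for x in data
                 if not adjacent(*two_best_units(x, weights)))
    return misses / len(data)
```

For a perfectly trained toy map both errors are zero; in practice the optimum SOM map size is the one that minimizes both measures, as discussed below.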
Table 5 provides the results of quality (qe) of fixed-size SOMs (for a more detailed discussion on the selection of the optimum SOM map size we refer to [34]).
From the results of qe of the present data sets, Chattopadhyay
et al. [34] found that for every data set there is an optimum SOM map size, and also established that there is no fixed rule for choosing the optimum SOM map size other than selecting the map with minimum (optimum) qe and te values.
Fig. 7. The resulting number of clusters in each 4-layer GHSOM for the 18 experimental problems of different sizes achieved for different values of tau (τ1, τ2).
4.1.2. Structure of GHSOMs
In the application of the GHSOM Toolbox, all the parameters are set to the default values except τ1 and τ2, the breadth- and depth-controlling parameters. The structure of GHSOMs was studied in terms of the number of layers and the map size at Layer 1, constructed by varying τ1 (for controlling the breadth of the maps) and by varying τ2 (for controlling the depth of the GHSOM), as reported in Table 4. Generally, when smaller τ1 and τ2 values are chosen there are more nodes, that is, larger SOM arrays, in the output. A large SOM array identifies a large number of patterns and reveals more
detailed structure within the data, whereas a small SOM array identifies fewer, more generalized patterns. Therefore, different values are used to test the GHSOM performance (see Table 4). According
Please cite this article in press as: M. Chattopadhyay, et al., Comparison of visualization of optimal clustering using self-organizing map and growing hierarchical self-organizing map in cellular manufacturing system, Appl. Soft Comput. J. (2014), http://dx.doi.org/10.1016/j.asoc.2014.04.027
Table 4
Data sets applied with the GHSOM algorithm using different tau values, with numbers of cluster units achieved at different layers and execution time.
Data Size tau1,tau2 Layer 1 Layer 2 Layer 3 Layer 4 cpu
1 Artificially generated 2 50 × 43 0.2,0.02 25 5 0 0 0.625
0.4,0.04 11 5 0 0 0.765625
0.6,0.06 5 5 73
0.8,0.08 2 2 2 2 0.390625
1,0 2 2 2 2 1.765625
2 Sudhakara Pandian and Mahapatra [65]
60 × 28 0.2,0.02 40 0 0 0 0.671875
0.4,0.04 35 0 0 0 0.4375
0.6,0.06 22 2 0 0 0.390625
0.8,0.08 12 3 0 0 0.390625
1,0 4 4 2 1.125
3 Sudhakara Pandian and Mahapatra [65]
37 × 20 0.2,0.02 29 0 0 0 0.265625
0.4,0.04 24 0 0 0 0.234375
0.6,0.06 14 2 0 0 0.859375
0.8,0.08 6 4 0 0 1.328125
1,0 4 4 2 0 0.484375
4 Sudhakara Pandian and Mahapatra [65]
55 × 20 0.2,0.02 45 0 0 0 0.5625
0.4,0.04 33 0 0 0 0.359375
0.6,0.06 23 2 0 0 0.375
0.8,0.08 5 5 0 0 0.8125
1,0 4 4 2 1 1.140625
5 Sudhakara Pandian and Mahapatra [65]
50 × 25 0.2,0.02 39 0 0 0 0.421875
0.4,0.04 29 0 0 0 0.390625
0.6,0.06 17 3 0 0 1.140625
0.8,0.08 4 4 0 0 0.484375
1,0 4 4 2 0 0.625
6 Sofianopoulou (1999) 20 × 12 0.2,0.02 16 0 0 0 0.1875
0.4,0.04 14 0 0 0 0.125
0.6,0.06 6 1 0 0 0.171875
0.8,0.08 4 1 0 0 0.140625
1,0 4 3 0 0.265625
7 Nagi et al. (1990) 20 × 20 0.2,0.02 14 0 0 0 0.1875
0.4,0.04 10 0 0 0 0.796875
0.6,0.06 6 1 0 0 0.1875
0.8,0.08 4 2 0 0 0.234375
1,0 3 2 1 0 0.34375
8 Sudhakara Pandian and Mahapatra [65]
30 × 15 0.2,0.02 23 0 0 0 0.25
0.4,0.04 16 0 0 0 0.078125
0.6,0.06 10 1 0 0 0.140625
0.8,0.08 5 4 0 0 0.375
1,0 4 2 2 0 0.265625
9 Nair and Narendran (1998)
7 × 7 0.2,0.02 5 0 0 0 0.109375
0.4,0.04 4 0 0 0 0.265625
0.6,0.06 4 0 0 0 0.109375
0.8,0.08 2 0 0 0 0.0625
1,0 2 0 0 0 0.046875
10 Nair and Narendran (1998)
20 × 8 0.2,0.02 13 1 0 0 0.109375
0.4,0.04 6 3 0 0 0.65625
0.6,0.06 3 3 0 0 0.265625
0.8,0.08 3 3 0 0 0.28125
1,0 3 3 1 0 0.265625
11 Nair and Narendran (1998)
20 × 20 0.2,0.02 18 0 0 0 0.203125
0.4,0.04 12 0 0 0 0.390625
0.6,0.06 8 0 0 0 0.078125
0.8,0.08 4 2 0 0 0.171875
1,0 4 1 0 0 0.125
12 Nair and Narendran (1999)
12 × 10 0.2,0.02 9 0 0 0 0.109375
0.4,0.04 6 0 0 0 0.078125
0.6,0.06 5 0 0 0 0.09375
0.8,0.08 4 0 0 0 0.0625
1,0 4 0 0 0 0.046875
Table 4 (Continued)
Data Size tau1,tau2 Layer 1 Layer 2 Layer 3 Layer 4 cpu
13 Sofianopoulou (1999) 5 × 4 0.2,0.02 4 0 0 0 0.109375
0.4,0.04 3 0 0 0 0.0625
0.6,0.06 3 0 0 0 0.09375
0.8,0.08 3 0 0 0 0.09375
1,0 3 0 0 0 0.078125
14 Won and Lee (2001) 5 × 5 0.2,0.02 4 0 0 0 0.078125
0.4,0.04 4 0 0 0 0.296875
0.6,0.06 3 0 0 0 0.0625
0.8,0.08 3 0 0 0 0.234375
1,0 3 0 0 0 0.0625
15 Sudhakara Pandian and Mahapatra [65]
7 × 5 0.2,0.02 4 0 0 0 0.140625
0.4,0.04 3 0 0 0 0.0625
0.6,0.06 2 0 0 0 0.09375
0.8,0.08 2 0 0 0 0.234375
1,0 2 0 0 0 0.0625
16 Sudhakara Pandian and Mahapatra [65]
8 × 6 0.2,0.02 7 0 0 0 0.125
0.4,0.04 5 0 0 0 0.109375
0.6,0.06 4 0 0 0 0.0625
0.8,0.08 3 0 0 0 0.078125
1,0 3 0 0 0 0.078125
17 Park and Suresh (2003) 19 × 12 0.2,0.02 12 0 0 0 0.15625
0.4,0.04 7 0 0 0 0.140625
0.6,0.06 5 1 0 0 0.171875
0.8,0.08 4 3 0 0 0.34375
1,0 2 2 0 0 0.265625
18 Artificially generated 2 20 × 25 0.2,0.02 9 0 0 0 0.140625
0.4,0.04 6 3 0 0 0.375
0.6,0.06 5 3 0 0 0.328125
0.8,0.08 3 3 0 0 0.453125
1,0 2 2 0 0 0.515625
to Table 4, the threshold τ1 was varied in steps of 0.20 starting from 1.00 down to 0.20, with the threshold τ2 correspondingly varied in steps of 0.02. The results show that setting the threshold τ1 to 1 would lead to a large number of layers with only 2 × 2 maps at Layer 1, whereas setting it to 0 would lead to a small number of layers with a huge map at Layer 1.
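The two thresholds implement the standard GHSOM growth rules; a minimal sketch of the decision logic (a simplified assumed form, not the GHSOM Toolbox's actual code):

```python
def should_grow_map(map_mqe, parent_qe, tau1):
    # breadth criterion: keep inserting rows/columns into a map while its
    # mean quantization error exceeds tau1 times the qe of its parent unit
    return map_mqe > tau1 * parent_qe

def should_expand_unit(unit_qe, qe0, tau2):
    # depth criterion: expand a unit into a child map on the next layer
    # while its qe exceeds tau2 times the overall layer-0 qe
    return unit_qe > tau2 * qe0
```

With τ1 = 1 the breadth criterion stops growth almost immediately, giving small (2 × 2) maps and a deeper hierarchy; with τ2 = 0 every unit with nonzero qe keeps being expanded, giving very deep branches, consistent with the behaviour reported in Table 4.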
According to Table 4, the results show that setting the threshold τ2 to 1 would lead to no hierarchy and setting it to 0 would lead to
very deep branches, while the map size at layer 1 is stable. The most interesting observation in Table 4 is that, for each set of threshold values (τ1, τ2), the number of units (patterns) extracted at layer 1 in each of the 18 experimental problems is minimum for the values τ1 = 1 and
Fig. 8. The resulting quantization error (qe) using GHSOM for the 17 experimental problems of different sizes achieved for different values of tau (τ1, τ2).
τ2 = 0. Thus, we chose the case (τ1 = 1, τ2 = 0) to analyze, simply because the results have at most four layers and the SOM arrays are large enough to represent part family features (also machine groups) and small enough to be visualized, as also shown in Fig. 7. For example, in the example problem data set, for the threshold values τ1 = 0.2 and τ2 = 0.02 the maximum number of patterns achieved was 9, whereas for the values τ1 = 1 and τ2 = 0 only two clusters were extracted at layer 1. The computation time for the GHSOM training was
0.515625 s. The resulting quantization error (qe) using GHSOM for the 17 experimental problems of different sizes, achieved for different values of tau (τ1, τ2), is shown in Fig. 8, which reveals that the qe is minimum only for the tau values (τ1 = 1, τ2 = 0). Table 4 also
Table 5
Comparison of QE and CPU achieved for different problem data sets after applying the GHSOM and SOM algorithms to obtain the best cell formation.
Sr. no. Matrix size cpu ghsom QE GHSOM cpu som SOM map size QE SOM
1 5 × 4 0.078125 2.8 1.609375 5 × 4 0.008
2 5 × 5 0.078125 4.86 1.640625 5 × 5 0.119
3 7 × 5 0.09375 9.61 2.828125 7 × 5 0.002
4 8 × 6 0.078125 9.3 3.28125 8 × 6 0.044
5 7 × 7 0.078125 8.9 2.90625 7 × 7 0.024
6 12 × 10 0.09375 28.53 7.203125 12 × 10 0.012
7 20 × 8 0.296875 8.16 11.125 20 × 8 0.001
8 19 × 12 0.234375 9.71 20.8125 19 × 12 0.001
9 20 × 12 0.15625 28.41 10.75 20 × 12 0.034
10 20 × 20 0.34375 3.16 47.70313 20 × 20 0
11 20 × 20 0.296875 8.81 48.2 20 × 20 0.001
12 30 × 15 0.40625 25.5 89.3 30 × 15 0
13 37 × 20 0.578125 8.99 120.3438 37 × 20 0.002
14 55 × 20 0.734375 20.11 142.125 55 × 20 0.022
shows the sub-clustering achieved in deeper layers, i.e., layer 2 to layer 4, for the respective problem data.
4.2. Training time
The training times of SOMs and GHSOMs required for each PMI data set were analyzed. The experiments were carried out in a controlled environment in which only the MATLAB application with the SOM Toolbox and GHSOM Toolbox ran on the computer system.
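The timing protocol can be sketched as follows (illustrative Python; the actual experiments used MATLAB with the SOM and GHSOM Toolboxes, and `train_fn` is a hypothetical stand-in for a toolbox training call):

```python
import time

def time_training(train_fn, data, repeats=3):
    # report the best of several wall-clock timings of one training run,
    # analogous to the cpu columns of Tables 4 and 5
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        train_fn(data)
        best = min(best, time.perf_counter() - start)
    return best
```

Taking the best of a few repeats reduces the influence of background load on the reported time.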
4.2.1. Training time of SOMs
Table 5 reports the time spent on training recommended-size SOMs.
Graphs in Fig. 9 show the training time of traditional SOMs and GHSOMs, respectively, for 14 experimental data sets. The results show that for every data set, when the size of the PMI matrix increases, the time required for training both SOMs and GHSOMs tends to increase. Moreover, the training time for GHSOMs is exceptionally small because the SOM Toolbox needs additional time to determine the map size. Fig. 10 shows the computation time (cpu) using GHSOM for the 16 experimental problems of different sizes achieved by varying τ1 and τ2. The results show that for every data set, the time required for
training GHSOMs tends to increase when the thresholds τ1 and τ2 are 1 and 0, respectively. The reason is that decreasing the value of the threshold τ2 makes the GHSOM a map with very deep branches.
Fig. 9. Comparison of computation time (cpu) of traditional SOMs and GHSOMs for 14 experimental data sets.
4.3. Comparison of cell formation solutions using SOM and GHSOM models with the benchmarked solutions from literature
When comparing the GHSOM with a SOM, we can identify the locations of the parts on the two second-layer maps on a corresponding 20 × 25 SOM. This allows us to view the hierarchical structure of the data on the traditional flat map. We find that, for example, the cluster on C1 and C2 of the U-matrix on the traditional SOM map forms one larger coherent cluster in the left and upper left corner of the GHSOM map, covering the rectangle spanned by the 1st unit. The same applies to the cluster of C3 and C4 of the U-matrix on the traditional SOM, which is represented by the map of unit 4 in the growing hierarchical self-organizing map. This cluster is mainly located in the right and the lower right corner of the SOM. The cluster of the 1st unit is represented by unit (1/4) on the 2nd layer of the GHSOM and explained in more detail on its subsequent layers. Note that this universal clustering is not easily discernible from the overall map representation in the SOM, where exactly this hierarchical information is missing. The subdivision of this machine-part cluster on the cell formation problem matrix becomes further obvious when we consider the second-layer clustering of this cell formation, where the variety of part sub-families are clearly separated, covering corresponding machine groups.
Another interesting hierarchical structure not evident from the
SOM is represented by the 1st and 4th units of layer 1 on the GHSOM map. When identifying the cluster areas of the U-matrix on the traditional SOM that are represented by [C1, C2] and [C3, C4] in the GHSOM, we find that it covers two clustering areas [unit 1 and unit
Fig. 10. The resulting computation time (cpu) using GHSOM for the 16 experimental problems of different sizes achieved for different values of tau (τ1, τ2).
4 in layer 1]. This is, on the one hand, a group of four units in the upper left corner of the SOM representing C1 and C2, whereas C3 and C4 are in the second clustering area in the lower right of the GHSOM map. The connection involving these two sub-clusters is missing in the large SOM. This may be because of the size of the SOM, where the overall organization of the map needs to be determined during the very first training steps, when the neighborhood range of the learning function still covers a large area of the SOM. A similar situation can be identified for several smaller clusters, which are scattered across different areas on the SOM, but adequately combined in the first layer of the growing hierarchical self-organizing map and further analyzed and separated as independent sub-clusters on subsequent layers. Yet another interesting feature of the GHSOM we want to emphasize is the overall reduction in map size. During the analysis we found the second layer of the GHSOM to represent the data at about the same level of cell formation detail as the corresponding SOM. However, the number of units
Table 6
Comparative GTE measures between the SOM and GHSOM approaches using the GTE measures of benchmarked solutions from literature, and % increase in performance of GTE using the GHSOM model.
Sr. no. References Size of problem SOM map size GTE (best result from literature using ART1, GA, K-means, SVD algorithms) GTE (using SOM approach) GTE (using proposed GHSOM approach) % increase in performance of GTE using GHSOM model than SOM model from benchmarked results
1 Chattopadhyay et al. [33] 8 × 6 7 × 7 – 81.25 81.25 0.00
2 Nair and Narendran (1998) 7 × 7 7 × 7 100 100 100 0.00
3 Nair and Narendran (1998) 20 × 8 20 × 8 58.54 85.71 87.82 2.46
4 Nair and Narendran (1998) 20 × 20 9 × 9 64.3 94 95.91 2.03
5 Nair and Narendran (1999) 12 × 10 10 × 10 84.61 84.61 100 18.19
6 Sofianopoulou (1999) 5 × 4 12 × 12 69.25 92.31 100 8.33
7 Won and Lee (2001) 5 × 5 15 × 15 84 84.61 85.71 1.30
8 Sudhakara Pandian and Mahapatra [65] 7 × 5 18 × 18 58.54 87.82 87.82 0.00
9 Sudhakara Pandian and Mahapatra [65] 8 × 6 19 × 19 83.93 87.5 92.3 5.49
10 Park and Suresh (2003) 19 × 12 21 × 21 78 83.93 87.5 4.25
11 Sofianopoulou (1999) 20 × 12 24 × 20 74.58 81.82 92.3 12.81
12 Nagi et al. (1990) 20 × 20 9 × 9 64.3 94 95.91 2.03
13 Sudhakara Pandian and Mahapatra [65] 30 × 15 25 × 25 76.71 78.89 93.97 19.12
14 Sudhakara Pandian and Mahapatra [65] 37 × 20 32 × 32 71.59 82 83.78 2.17
15 Sudhakara Pandian and Mahapatra [65] 50 × 25 36 × 36 69.13 83.64 83.64 0.00
16 Sudhakara Pandian and Mahapatra [65] 55 × 20 38 × 38 81.2 81.2 83.64 3.00
of all individual second-layer maps combined is only 11, as opposed to 150 units in the 20 × 20 SOM. With the GHSOM model, this number of units is determined automatically, and only the necessary number of units is created for each level of detail representation required by the respective layer. Furthermore, not all branches are grown to the same depth of the hierarchy. As can be seen from Figs. 5 and 7, only some of the units are further expanded in a third-layer map. With the resulting maps at all layers of the hierarchy being rather small, activation calculation and winner evaluation of the GHSOM are by orders of magnitude faster than in the SOM model. Apart from the speed-up gained by the reduced network size, orientation for the user is highly improved as compared to the rather huge maps, which cannot be easily comprehended as a whole.
To demonstrate the performance of the proposed two visual
clustering algorithms, we tested the SOM and GHSOM algorithms on 15 benchmarked CFPs collected from the literature. The matrix sizes and their sources are presented in Table 6. We compare the GTE
Table 7
Paired t-test of GTE measure between the SOM and GHSOM approaches.
N Mean St. dev. SE mean
GTE result using SOM approach 16 86.4556 5.8280 1.4570
GTE result using GHSOM approach 16 90.7219 6.4196 1.6049
Difference 16 −4.26625 5.16318 1.29079
95% CI for mean difference: (−7.01751, −1.51499).
t-Test of mean difference = 0 (versus not = 0): t-value = −3.31, p-value = 0.005.
obtained by our two algorithms with the best GTE obtained by the following 4 approaches: the ART1 [65], genetic algorithm (GA) [67], K-means clustering [66] and SVD [66] clustering methods from the literature; the results are presented in Table 6. The above-mentioned approaches have been claimed to provide the best GTE results observed in the existing literature. Based on these observations, the present study has attempted to compare the efficacy of the proposed SOM and GHSOM methodologies with the best results claimed by other authors for their methodologies, as available in Refs. [65–67].
Fig. 13 shows a line diagram of the benchmarked results. It is apparent in this figure that the GTE values from approaches available in the literature are inferior, except for one problem where all three GTE results are equal and another problem where the GTE using the SOM approach is equal; therefore, we compare only the performance of the two proposed algorithms. It is thus evident from Fig. 13 that the upper thin black line, representing GHSOM, achieves better goodness of cell formation than the other two, SOM and the approaches found in literature, represented by the pink dotted and thick blue lines respectively.
Table 6 shows the % increase in performance of GTE using the GHSOM model over the SOM model and the benchmarked results, together with the percentage improvement of the GTE obtained using the GHSOM algorithm. As can be seen in Table 6, the algorithms proposed in this paper obtained operation sequence based cell formations whose GTE is never smaller than any of the best reported results using the ART1, GA, K-means and SVD algorithms. It is found that GHSOM improved the GTE measure over the best reported methodologies from the literature by more than 40% in the case of 5 (31.25%) CFP problems.
More specifically, the GHSOM algorithm obtains, for 4 (25%) problems, GTE values equal to the best ones found using the SOM, and improves the GTE values for 12 (75%)
problems. In 5 (31.25%) problems the percentage improvement is higher than 5%. For 3 (18.75%) problems the improvement is greater than 12%, showing the good quality and power of the visual clustering using GHSOM compared with the SOM algorithm.
Fig. 11. (a) Probability plot of GTE measure obtained from the SOM approach. (b) Probability plot of GTE measure obtained from the GHSOM approach.
Therefore, a statistical investigation has been performed to compare the two proposed approaches and to establish the superiority of the GHSOM approach over the SOM approach as far as the GTE measure of cell formation performance is concerned, as shown in Tables 5 and 7 and Fig. 11a and b. A paired t-test with a 95% confidence interval (CI) for the mean difference between the GTE values obtained from the SOM and GHSOM approaches was performed (from the results of Table 6). The confidence interval for the mean difference between the two approaches does not include zero, indicating a difference between them. The small p-value (p = 0.005) further suggests that the data are not consistent with H0: d = 0, that is, the two approaches do not perform similarly. In particular, the GHSOM approach (mean = 90.72) performed better than the SOM approach (mean = 86.45) with respect to the GTE measure for the sixteen problems (Table 6).
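The paired t-test of Table 7 can be reproduced from the sixteen GTE pairs of Table 6 with standard-library code (a sketch; the paper does not state which statistics package was used):

```python
import math

gte_som = [81.25, 100, 85.71, 94, 84.61, 92.31, 84.61, 87.82,
           87.5, 83.93, 81.82, 94, 78.89, 82, 83.64, 81.2]
gte_ghsom = [81.25, 100, 87.82, 95.91, 100, 100, 85.71, 87.82,
             92.3, 87.5, 92.3, 95.91, 93.97, 83.78, 83.64, 83.64]

def paired_t(a, b):
    # paired t statistic: mean of the differences over its standard error
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    return mean, sd, mean / (sd / math.sqrt(n))

mean_d, sd_d, t = paired_t(gte_som, gte_ghsom)
# mean_d ≈ −4.26625, sd_d ≈ 5.16318, t ≈ −3.31, matching Table 7
```

The p-value then follows from the t distribution with n − 1 = 15 degrees of freedom, giving the reported p = 0.005.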
Both GTE measures obtained after applying the SOM and GHSOM approaches are normally distributed (Anderson–Darling normality test at significance level 0.05; p = 0.077 for GTE using the SOM approach and p = 0.226 for GTE using the GHSOM approach).
The normal probability plots versus the data are shown in the graphical output of Fig. 11a and b. The data depart from the fitted line most evidently in the extremes, or distribution tails. In any t-test the assumption of normality is of only moderate importance; thus, even though the data appear to depart from the fitted line in the lower extreme, the Anderson–Darling (AD) test's p-value indicates that it is safe to apply the paired t-test.
5. Discussion
As a visual cluster analysis method, the sequence-based (ordinal data) cell formation conveniently orders patterns of variability on the basis of variance. However, as a linear method, it may be a suboptimal way of spanning a data space if the system is nonlinear. The nonlinear SOM orders patterns of variability on the basis of topology rather than variance. A major strength of the SOM is that the underlying patterns in an MPI data set can be visualized in the same form as the original data. Thus, if the input data are MPI
Fig. 12. Comparison of quantization error (qe) of traditional SOMs and GHSOMs for 14 experimental data sets.
data based on ordinal sequence data, then the outputs are similar patterns of parts to be processed by a machine group, not different machine-part patterns. As the SOM output patterns resemble the
input format, their qualitative interpretation may be easier. Fig. 12 shows that the quantization error is much lower in SOM than in GHSOM. It is also evident from Fig. 12 that individually the qe is
increasing for the large-size problems in SOM. On the other hand, the GHSOM resulted in less qe when the problem size is increasing. The major advantages of the GHSOM model over the standard SOM are the following. First, the overall training time is reduced since only a necessary number of units is developed to organize the data at a certain level of detail. Second, the GHSOM uncovers the hierarchical structure of the data, allowing the user to understand and analyze a large amount of data in an exploratory way. Each SOM array in the hierarchy explains a particular set of characteristics of the data. This makes the GHSOM analysis an excellent tool for feature extraction and classification. Third, the size of the SOM array does not have to be specified subjectively beforehand; the GHSOM automatically expands in a multi-dimensional structure. Here we used the GHSOM algorithm to extract characteristic patterns of machine-part cell formation. Four characteristic part family patterns are extracted in the first-layer GHSOM array, together with characteristic machine group patterns. Two of these are further expanded in a second layer, yielding more machine-part cluster pattern evolution details. Note that this machine-part cell formation is not
Fig. 13. Comparison of goodness of cell formation (using GTE) measures of the two proposed approaches using SOM and GHSOM with that of approaches available from the literature, such as the ART1, GA, K-means and SVD algorithms, applied on 15 benchmarked cell formation problems. (For interpretation of the references to color in the text, the reader is referred to the web version of this article.)
easily discernible from the overall map representation in the SOMs, where exactly this hierarchical information is lost. The subdivision of the output machine-part cell becomes further evident when we consider the layered classification of the cell, where the various sub-cell formations are clearly separated, as shown in Fig. 5. With the resulting maps at all layers of the hierarchy being rather small, activation calculation and winner evaluation of the GHSOM are by orders of magnitude faster than in the SOM model. If a large enough SOM map size is applied to a large CFP data set, then the relationship between the two subclusters of Fig. 5 is lost, because the overall organization of the map needs to be determined during the very first training steps, when the neighborhood range of the learning function still covers a large area of the SOM. The same can happen for several smaller clusters, which are scattered across different areas on the SOM but nicely combined in the first layer of the GHSOM map and further analyzed and separated as independent sub-clusters (cells) on subsequent layers. Apart from the speed-up gained by the reduced network size, orientation for the user is highly improved as compared to the rather huge maps, which cannot be easily comprehended as a whole.
6. Conclusions and future work directions
The analysis of the structures of both the traditional SOM map and the GHSOM map has established one significant fact: in visual clustering of machine-part cell formation, the traditional SOM is most efficient for smaller unstructured PMI data sets, whereas the GHSOM was found to produce better results on large unstructured PMI data sets. The choice of the optimum GHSOM map architecture through the minimum pattern units extracted at layer 1 for the respective threshold values τ1 and τ2 are the two novel contributions of the present work, which may also have implications in other areas of cluster analysis.
According to the experimental results, we found that the resulting maps of the GHSOM, serving as retrieval interfaces, can help industry managers obtain better insight into the hierarchical structure of cell formation and increase their understanding of the semantic relationships among machine-part clusterings in an explorative way. By means of the GHSOM output maps at each layer, engineers and managers can keep an overview of each cluster, find the needed cell more easily and quickly, and make better decisions in selecting the best possible machine-part clustering, i.e., the best "cell", for their needs. Last but not least, the GHSOM is more promising than the traditional SOM: by ensuring a consistent global orientation of the individual maps in the respective layers,
the topological similarities of neighborhood maps are preserved owing to its adaptive architecture and the ability to expose the hierarchical structure of sequence-based cell formation problem data. Thus navigation across map boundaries is facilitated, allowing the exploration of similar clusters that are represented by the neighboring branches in the GHSOM structure. The overall training time
is exceptionally reduced compared with SOM training. Furthermore, the experimental results clearly indicated that
he machine-part visual clustering using GHSOM can be success-ully applied in identifying a cohesive set of part family that isrocessed by a machine group. Computational experience specif-
cally with the proposed GHSOM algorithm, on a set of 15 CFProblems from the literature, has shown that it performs remark-bly well. The GHSOM algorithm obtained solutions that are ateast as good as the ones found the literature. For 75% of theell formation problems, the GHSOM algorithm improved theoodness of cell formation through GTE performance measuresing SOM as well as best one from the literature, in some casesy as much as more than 12.81% (GTE). Thus, comparing theesults of the experiment in this paper with the SOM and GHSOMsing the paired t-test it has been revealed that the GHSOMpproach performed better than the SOM approach so far the GTEeasures of performance of the machine-part cell formation is
oncerned.The visual clustering approach of growing hierarchical self-
rganizing map algorithm as a solution of cell formation probleman be implemented with the latest popular meta-heuristic algo-ithms. We believe the next decade will let signal to this visuallustering in several other computational intelligence domainshich are considered to be a forefront research area in present era
f soft computing.
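The paired t-test used above compares GTE scores problem-by-problem, since SOM and GHSOM are run on the same benchmark instances. The sketch below works through the standard computation on made-up GTE values (the actual study used 15 benchmark problems and its own results, which are not reproduced here); with n - 1 degrees of freedom, a statistic above the critical value (about 2.36 at the 5% level for 7 d.o.f.) rejects the hypothesis of equal mean GTE.

```python
import math

# Paired t-test on group technology efficiency (GTE) scores for the same
# benchmark problems solved by SOM and by GHSOM. The values below are
# made-up illustrations, NOT the paper's reported results.
gte_som   = [0.72, 0.65, 0.80, 0.58, 0.77, 0.69, 0.74, 0.61]
gte_ghsom = [0.78, 0.71, 0.80, 0.66, 0.79, 0.75, 0.74, 0.70]

# Per-problem differences; the test is a one-sample t-test on these.
diffs = [g - s for g, s in zip(gte_ghsom, gte_som)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # t with n-1 d.o.f.

print(round(t_stat, 2))
```

Here the statistic exceeds the 5% critical value, so for these illustrative numbers the mean GTE of GHSOM would be judged significantly higher than that of SOM.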
Acknowledgements

Sincere thanks are due to the anonymous reviewers and Dr. Surajit Chattopadhyay for constructive suggestions to enhance the quality of the manuscript.