Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2016, Article ID 3207627, 6 pages, http://dx.doi.org/10.1155/2016/3207627

Research Article
An Incremental Radial Basis Function Network Based on Information Granules and Its Application

Myung-Won Lee and Keun-Chang Kwak

Department of Control and Instrumentation Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759, Republic of Korea

Correspondence should be addressed to Keun-Chang Kwak; kwak@chosun.ac.kr

Received 29 June 2016; Accepted 22 August 2016

Academic Editor: Toshihisa Tanaka

Copyright © 2016 M.-W. Lee and K.-C. Kwak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper is concerned with the design of an Incremental Radial Basis Function Network (IRBFN) that combines Linear Regression (LR) and a local RBFN for the prediction of heating load and cooling load in residential buildings. The proposed IRBFN is designed by building a collection of information granules through the Context-based Fuzzy C-Means (CFCM) clustering algorithm, guided by the distribution of the error of the linear part of the LR model. After adopting an LR as the global model, we refine it through a local RBFN that captures the remaining, more localized nonlinearities of the system. The experiments are performed on the estimation of the energy performance of 768 diverse residential buildings. The experimental results reveal that the proposed IRBFN performs well in comparison with LR, the standard RBFN, the RBFN with information granules, and the Linguistic Model (LM).

1. Introduction

During the past few decades, we have witnessed a rapid growth in the number and variety of applications of fuzzy logic, neural networks, and evolutionary computing as a framework of computational intelligence [1–4]. Here we concentrate on the incremental construction of a Radial Basis Function Network (RBFN) with the aid of information granules. In system modeling, one generally starts with the simplest linear model and then refines it by incorporating an additional nonlinear model. The most commonly used starting point is the Linear Regression (LR) model [5]. If the LR model proves insufficient for prediction, further refinements are implemented. This concept is a strong factor motivating the development of incremental models, whose effectiveness and superiority have been demonstrated in previous work. The incremental model introduced by Pedrycz and Kwak [6] represented nonlinear and complex characteristics more effectively than conventional models. There are several advantages to this approach. First, the commonly used framework of LR is retained. The nonlinear behavior of the system can be confined to some limited regions of the input space, so adding only a few patches in these regions is practically relevant and conceptually justifiable. Furthermore, the approach established a comprehensive design platform offering a complete step-by-step procedure for the construction of the incremental model [7]. The clustering technique used in the design of the incremental model is the Context-based Fuzzy C-Means (CFCM) clustering algorithm [8]. This clustering algorithm generates information granules in the form of fuzzy sets and estimates clusters while preserving the homogeneity of the clustered data points with respect to the input and output variables. In contrast to context-free clustering methods [9–12], context-based fuzzy clustering is performed with the use of contexts produced in the output space. The effectiveness of CFCM clustering has been successfully demonstrated in previous works [13–16].

In this paper, we design a variant model based on the fundamental idea of the incremental model. For this purpose, we design an Incremental Radial Basis Function Network (IRBFN) by incorporating LR and a local RBFN for accurate quantitative prediction of the energy performance of residential buildings. We adopt an LR as the global model and refine it through a local RBFN that captures the remaining, more localized nonlinearities of the system with the aid of information granulation. Here, the learning of the local RBFN is performed by LSE and Back-Propagation (BP). Research on the energy performance of buildings has recently been raising concerns about energy waste. The computation of the heating and cooling load required for efficient building design is needed to determine the specifications of the heating and cooling equipment that maintains comfortable indoor air conditions [17, 18]. The experiments are performed on the estimation of the energy performance of 768 diverse residential buildings.

This paper is organized in the following fashion. In Section 2, the steps of the CFCM clustering method are described. The overall design concept and process of the IRBFN are proposed in Section 3. The experimental results are presented and discussed in Section 4. Concluding comments are given in Section 5.

2. Context-Based Fuzzy C-Means Clustering

The CFCM clustering introduced by Pedrycz [8] estimates the cluster centers while preserving homogeneity with the use of fuzzy granulation. First, the contexts are produced from the output variable of the modeling problem. Next, the cluster centers are estimated by FCM clustering from the input data points included in each context. By forming fuzzy clusters in the input and output spaces, CFCM clustering converts numerical data into semantically meaningful information granules.

In what follows, we briefly describe the essence of CFCM clustering [8]. In a batch-mode operation, this clustering determines the cluster centers and the membership matrix through the following steps.

Step 1. Select the number of contexts p and the number of clusters c per context.

Step 2. Produce the contexts in the output space. These contexts are generated through a series of triangular membership functions equally spaced along the domain of the output variable. However, we may encounter a data scarcity problem when only a small amount of data falls within some linguistic context, which makes it difficult to obtain clusters from CFCM clustering. Therefore, we use the probabilistic distribution of the output variable to produce flexible linguistic contexts [13].
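As an illustration of Step 2, equally spaced triangular contexts with a 0.5 overlap can be sketched in Python (a minimal sketch, not the authors' code; the synthetic output variable and the choice p = 6 are assumptions):

```python
import numpy as np

def triangular_contexts(y, p):
    """Membership of each output value y_k in p triangular contexts
    equally spaced over [y.min(), y.max()], with a 0.5 overlap."""
    centers = np.linspace(y.min(), y.max(), p)
    half = centers[1] - centers[0]   # half-width equals the spacing -> 0.5 overlap
    # membership of each sample in each context, shape (N, p)
    w = np.maximum(0.0, 1.0 - np.abs(y[:, None] - centers[None, :]) / half)
    return w

y = np.random.default_rng(0).normal(size=200)   # stand-in for the output variable
W = triangular_contexts(y, p=6)
print(W.shape)   # (200, 6)
```

Because adjacent triangles overlap by 0.5, the memberships of each sample across the contexts sum to one, which is the partition-of-unity property the contexts rely on.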

Step 3. Once the contexts have been formed, the clustering is directed by the fuzzy set of each context.

Step 4. Initialize the membership matrix with random values between 0 and 1.

Step 5. Compute the c fuzzy cluster centers using (1). Here the fuzzification factor is generally used as a fixed value, m = 2:

v_i = \frac{\sum_{k=1}^{N} u_{ik}^{m} x_k}{\sum_{k=1}^{N} u_{ik}^{m}}   (1)

[Figure 1: Combination of LR and local RBFN in the design of the incremental RBFN (o: data points).]

Step 6. Compute the objective function according to

J = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{m} d_{ik}^{2}   (2)

where d_{ik} is the Euclidean distance between the i-th cluster center and the k-th data point. The objective function is minimized by iteratively updating the values of the membership matrix and the cluster centers. Stop when the change in J falls below a given tolerance.

Step 7. Compute a new membership matrix using (3); go to Step 5.

u_{ik}^{t} = \frac{w_{tk}}{\sum_{j=1}^{c} \left( \| x_k - v_i \| / \| x_k - v_j \| \right)^{2/(m-1)}}   (3)

where u_{ik}^{t} represents the element of the membership matrix induced by the i-th cluster and the k-th data point in the t-th context, and w_{tk} denotes the membership value of the k-th data point in the t-th context.
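Steps 4–7 above can be sketched as a short NumPy loop (an illustrative sketch under the stated formulas, not the authors' implementation; the data, the context membership vector w_t, and the stopping tolerance are assumptions):

```python
import numpy as np

def cfcm(X, w_t, c, m=2.0, tol=1e-6, max_iter=100, seed=0):
    """Context-based FCM for a single context t.
    X   : (N, d) input data
    w_t : (N,) membership of each point in context t (should not be all zero)
    c   : number of clusters for this context
    Returns centers V (c, d) and membership matrix U (c, N)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U = U / U.sum(axis=0) * w_t                  # Step 4: columns scaled to sum to w_tk
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)            # eq. (1)
        D2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)   # squared distances d_ik^2
        J = (Um * D2).sum()                                   # eq. (2)
        if abs(J_prev - J) < tol:                             # Step 6 stopping rule
            break
        J_prev = J
        D = np.sqrt(np.maximum(D2, 1e-12))                    # guard against d_ik = 0
        ratio = (D[:, None, :] / D[None, :, :]) ** (2.0 / (m - 1.0))
        U = w_t / ratio.sum(axis=1)                           # eq. (3)
    return V, U

X = np.random.default_rng(1).random((120, 2))
w_t = np.ones(120)        # degenerate context (all memberships 1): reduces to plain FCM
V, U = cfcm(X, w_t, c=3)
print(V.shape, U.shape)
```

Note that eq. (3) makes each column of U sum to w_tk, so the clusters of context t are driven only by the data that actually belong to that context.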

3. Incremental Radial Basis Function Networks (IRBFN)

In this section we focus on the two essential phases of the proposed IRBFN. First, we design a standard LR, which can be treated as a preliminary construct capturing the linear part of the data. Next, the local RBFN is designed to eliminate the errors produced by the regression part of the model. Figure 1 shows an example of nonlinear relationships and their modeling through a combination of an LR model of a global character and a collection of local RBFNs. As shown in Figure 1, the linear regression exhibits a good match except in two local areas. These remaining regions are predicted by the local RBFN with the use of information granules obtained through the CFCM clustering algorithm. Figure 2 shows the architecture and overall flow of processing realized in the design of the proposed IRBFN.

The principle of the IRBFN is explained in the following steps.

Step 1. Design a Linear Regression (LR) model in the input-output space, z = L(x; b), with b = [a, a_0]^T denoting the coefficient vector of the regression hyperplane of the linear model. We obtain the predicted output by using the LR as a global model [6]; on the basis of the original data set, a collection of input-error pairs (x_k, e_k) is formed.

[Figure 2: Main design process of the incremental RBFN: inputs x1 and x2 feed a linear regression producing z and an RBFN based on CFCM clustering with receptive fields R1–R4, centers c1–c4, and weights w1–w4, whose weighted sum yields the error estimate E; the final output is Y.]
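Step 1 can be sketched with an ordinary least-squares fit (a sketch on synthetic data; the feature dimensions mirror the data set described later, but the data themselves are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((768, 8))                          # 768 samples, 8 building features
y = X @ rng.normal(size=8) + 0.3 * np.sin(5 * X[:, 0]) + 3.0  # linear part + local nonlinearity

# z = L(x; b): fit the hyperplane b = [a, a0]^T by least squares
A = np.hstack([X, np.ones((len(X), 1))])          # append bias column for a0
b, *_ = np.linalg.lstsq(A, y, rcond=None)
z = A @ b                                         # global (linear) prediction

# collection of input-error pairs (x_k, e_k) for the local RBFN
e = y - z
print(e.shape)
```

The residuals e carry exactly the localized nonlinearity that the linear hyperplane cannot express; they become the training target of the local RBFN in the next steps.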

Step 2. Construct the collection of contexts in the error space of the regression model (E_1, E_2, ..., E_p), where p is the number of contexts. The distribution of these fuzzy sets is obtained through the statistical method [7, 8] mentioned in Section 2; the contexts are characterized by triangular membership functions with a 0.5 overlap between neighboring fuzzy sets.

Step 3. Complete CFCM clustering in the input-output space from the contexts produced in the error space. The obtained cluster centers are used as the centers of the receptive fields in the design of the local RBFN, as shown in Figure 1. For p contexts and c clusters per context, the number of nodes in the hidden layer is c × p.

Step 4. Calculate the output of the local RBFN. The final output of the RBFN is the weighted sum of the output values associated with the receptive fields:

o = \sum_{i=1}^{c \times p} w_i c_i   (4)

The receptive field functions are fixed, and then the weights of the output layer are either directly estimated by LSE (Least Squares Estimation) or trained by BP (Back-Propagation). These methods are among the most representative techniques used in conjunction with RBFNs [2]. To learn and adapt the architecture of the RBFN to cope with changing environments, BP learning is needed if we use the steepest descent method to tune both the centers of the radial basis functions and the output weights. Otherwise, we can directly obtain the output weights in a one-pass estimation using LSE.
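The one-pass LSE estimation of the output weights can be sketched as follows (a minimal sketch assuming Gaussian receptive fields with a common width; the centers and the error signal stand in for those obtained from CFCM clustering and the LR residuals):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((200, 2))
e = np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1])   # stand-in for the LR error signal

centers = rng.random((12, 2))   # c*p receptive-field centers (assumed given by CFCM)
sigma = 0.3                     # assumed common receptive-field width

# design matrix of receptive-field activations R_i(x_k)
D2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-D2 / (2 * sigma ** 2))            # shape (N, c*p)

# one-pass least-squares estimate of the output weights
w = np.linalg.pinv(Phi) @ e
e_hat = Phi @ w
print(np.sqrt(np.mean((e - e_hat) ** 2)))       # training RMSE of the local RBFN
```

Because the receptive fields are fixed, the network is linear in w, which is why a single pseudo-inverse solve suffices; BP is only needed when the centers and widths are tuned jointly with the weights.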

Table 1: Optimal values of the fuzzification coefficient for selected numbers of contexts (p) and clusters (c).

p \ c    3      4      5      6
3        2.6    2.1    2.2    2.3
4        2.2    2.1    2.2    2.0
5        1.9    1.9    1.9    2.0
6        1.7    1.8    1.8    2.0

Step 5. Calculate the final output of the proposed IRBFN. The granular result of the IRBFN is combined with the output of the linear part:

Y = z + E   (5)

To evaluate the overall performance, we use the standard root mean square error (RMSE), defined as

RMSE = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} (Y_k - y_k)^2 }   (6)
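Steps 1–5 and the evaluation in (5)–(6) combine as in this toy sketch (the numbers are illustrative, not taken from the experiments):

```python
import numpy as np

def rmse(Y, y):
    """Eq. (6): root mean square error between predictions Y and targets y."""
    return np.sqrt(np.mean((Y - y) ** 2))

# illustrative quantities: targets y, LR output z, local-RBFN error estimate E
y = np.array([20.0, 25.0, 30.0])
z = np.array([19.0, 26.0, 28.0])   # global linear prediction
E = np.array([0.8, -0.7, 1.5])     # error predicted by the local RBFN

Y = z + E                          # eq. (5): incremental model output
print(rmse(z, y), rmse(Y, y))      # refinement reduces the error in this toy case
```

The point of the construction is visible even at this scale: the RMSE of the combined output Y is smaller than that of the linear part z alone whenever the local RBFN predicts the residual error well.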

4. Experimental Results

In the experiments, we report on the design and performance of the proposed models in assessing the heating load and cooling load requirements of buildings as a function of building parameters. All experiments were completed in 10-fold cross-validation mode with a typical 60–40 split between the training and testing data subsets. We perform the energy analysis using 12 different building shapes simulated in Ecotect [17, 18].

These 12 building forms were generated from an elementary cube (3.5 × 3.5 × 3.5), with each building form composed of 18 elements. The materials used for each elementary cube are the same for all building forms; the selection was made from the newest and most common materials in the building construction industry and those with the lowest U-value [17]. The buildings differ with respect to the glazing area, the glazing area distribution, orientation, overall height, roof area, wall area, surface area, and relative compactness. The data set comprises 768 samples and 8 features. The attributes to be predicted from the preceding 8 input attributes are two real-valued responses: heating load and cooling load.

We obtained the experimental results for the two essential parameters (p, c) controlling the granularity of the construct in the input and output spaces. The numerical range of the fuzzification factor (m) used in the experiments is between 1.5 and 3.0, with an incremental step of 0.1. Table 1 lists the optimal values of the fuzzification factor as the numbers of contexts and clusters increase. Figure 3 shows the variation of the RMSE with the fuzzification factor in the case p = c = 6 for heating load prediction. The optimal parameter values are those for which the testing error is minimal.
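The scan over the fuzzification factor described above can be sketched as a simple grid search (a sketch; the `evaluate` function is a hypothetical stand-in for retraining the IRBFN and measuring the testing RMSE):

```python
import numpy as np

def evaluate(m):
    """Hypothetical stand-in for training the IRBFN with fuzzification
    factor m and returning the testing RMSE; a real run would rebuild
    the contexts, clusters, and output weights for each m."""
    return (m - 1.9) ** 2 + 2.7   # toy error surface with a minimum near m = 1.9

# m from 1.5 to 3.0 with an incremental step of 0.1, as in the experiments
ms = np.round(np.arange(1.5, 3.0 + 1e-9, 0.1), 1)
errs = [evaluate(m) for m in ms]
best = ms[int(np.argmin(errs))]
print(best)
```

This is exactly the procedure behind Table 1: one such scan per (p, c) pair, keeping the m that minimizes the testing error.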

[Figure 3: RMSE variation with the values of the fuzzification coefficient (p = c = 6); x-axis: fuzzification factor (m), y-axis: RMSE, with training and testing curves.]

In the conventional method [14], the contexts were produced through triangular membership functions equally spaced along the domain of the output variable. However, we may encounter a data scarcity problem due to the small amount of data included in some contexts. Thus, we use a probabilistic distribution of the output variable to obtain flexible contexts. For this purpose, the contexts in the error space are produced based on the histogram shown in Figure 4, the Probability Density Function (PDF), and the Conditional Density Function (CDF) [13]. Figure 5 shows the contexts (p = 6) generated in the error space of the LR model, as one example among the 10-fold cross-validation runs. Figure 6 shows the prediction performance of the local RBFN for the error of the LR; the result clearly shows that the local RBFN with the use of information granulation has good prediction capability.

[Figure 4: Histogram in the error space; x-axis: error, y-axis: number of counts.]

[Figure 5: Contexts generated in the error space; x-axis: error of LR, y-axis: degree of membership.]

[Figure 6: Prediction by the local RBFN of the error of the LR; x-axis: testing-data index, y-axis: error; curves: error of LR and predicted error of local RBFN.]

Figures 7 and 8 show the performance of the IRBFN based on LSE for the prediction of heating load and cooling load, respectively. Here the number of epochs is 1000 and the learning rate is 0.01. As shown in these figures, the proposed IRBFN exhibits good generalization capability on the testing data set.

[Figure 7: Prediction performance for heating load; x-axis: testing-data index, y-axis: heating load; curves: desired output and IRBFN output.]

[Figure 8: Prediction performance for cooling load; x-axis: testing-data index, y-axis: cooling load; curves: desired output and IRBFN output.]

Tables 2 and 3 list the comparison results of RMSE for the prediction of heating and cooling load, respectively.

Table 2: Comparison results of RMSE for the prediction of heating load (mean value of 10-fold cross-validation; *: number of rules).

Methods                      Number of nodes   RMSE (training)   RMSE (testing)
LR                           —                 2.936             2.911
MLP                          36                2.883             2.890
RBFN                         36                3.707             5.199
RBFN (CFCM) [14]             36                2.767             3.106
LM                           36*               4.084             4.388
Proposed incremental method:
  IRBFN (LSE)                36                2.284             2.826
  IRBFN (BP)                 36                2.353             2.730

Table 3: Comparison results of RMSE for the prediction of cooling load (mean value of 10-fold cross-validation; *: number of rules).

Methods                      Number of nodes   RMSE (training)   RMSE (testing)
LR                           —                 3.180             3.208
MLP                          36                3.176             3.226
RBFN                         36                3.601             4.812
RBFN (CFCM) [14]             36                2.866             3.388
LM                           36*               3.866             4.296
Proposed incremental method:
  IRBFN (LSE)                36                2.462             3.102
  IRBFN (BP)                 36                2.555             3.089

As listed in these tables, the experimental results reveal that the proposed IRBFN performs well in comparison with LR, MLP (Multilayer Perceptron), the conventional RBFN, the RBFN with CFCM clustering, and LM (Linguistic Model).

5. Conclusions

We developed the incremental RBFN by combining LR and a local RBFN for the prediction of the heating load and cooling load of residential buildings. The results show that the proposed IRBFN has good approximation and generalization capabilities with the aid of information granulation, and that the IRBFN combining LR and a local RBFN performs well in comparison with previous works. In further research, we shall optimize the number of contexts and the number of clusters per context based on an evolutionary algorithm.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by research funds from Chosun University, 2012.

References

[1] D. Simon, Evolutionary Optimization Algorithms, John Wiley & Sons, 2013.

[2] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, New York, NY, USA, 1997.

[3] S. Sumathi and S. Panneerselvam, Computational Intelligence Paradigms: Theory and Applications Using MATLAB, CRC Press, New York, NY, USA, 2010.

[4] T. P. Trappenberg, Fundamentals of Computational Neuroscience, Oxford University Press, Oxford, UK, 2nd edition, 2010.

[5] G. A. F. Seber, Linear Regression Analysis, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1977.

[6] W. Pedrycz and K.-C. Kwak, "The development of incremental models," IEEE Transactions on Fuzzy Systems, vol. 15, no. 3, pp. 507–518, 2007.

[7] W. Pedrycz and F. Gomide, Fuzzy Systems Engineering: Toward Human-Centric Computing, Wiley-Interscience, 2007.

[8] W. Pedrycz, "Conditional fuzzy C-means," Pattern Recognition Letters, vol. 17, no. 6, pp. 625–631, 1996.

[9] L. Hu and K. C. Chan, "Fuzzy clustering in a complex network based on content relevance and link structures," IEEE Transactions on Fuzzy Systems, vol. 24, no. 2, pp. 456–470, 2016.

[10] P. Fazendeiro and J. V. De Oliveira, "Observer-biased fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 85–97, 2015.

[11] T. C. Glenn, A. Zare, and P. D. Gader, "Bayesian fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 5, pp. 1545–1561, 2015.

[12] A. Proietti, L. Liparulo, and M. Panella, "2D hierarchical fuzzy clustering using kernel-based membership functions," Electronics Letters, vol. 52, no. 3, pp. 193–195, 2016.

[13] S.-S. Kim and K.-C. Kwak, "Development of quantum-based adaptive neuro-fuzzy networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 91–100, 2010.

[14] W. Pedrycz and A. V. Vasilakos, "Linguistic models and linguistic modeling," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 745–757, 1999.

[15] K.-C. Kwak, "A design of genetically optimized linguistic models," IEICE Transactions on Information and Systems, vol. E95-D, no. 12, pp. 3117–3120, 2012.

[16] W. Pedrycz, "Conditional fuzzy clustering in the design of radial basis function neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 601–612, 1998.

[17] L. Perez-Lombard, J. Ortiz, and C. Pout, "A review on buildings energy consumption information," Energy and Buildings, vol. 40, no. 3, pp. 394–398, 2008.

[18] A. Tsanas and A. Xifara, "Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools," Energy and Buildings, vol. 49, pp. 560–567, 2012.


2 Computational Intelligence and Neuroscience

refine it through local RBFN that captures remaining andmore localized nonlinearities of the system with the aid ofinformation granulation Here the learning methods of localRBFN are performed by LSE and Back-Propagation (BP)This research on the topic of energy performance of buildingshas been recently raising concerns about energy waste Thecomputation of the heating and cooling load to performthe efficient building design is required to determine thespecifications of the heating and cooling equipment neededto maintain comfortable indoor air conditions [17 18] Theexperiments are performed on the estimation of energyperformance of 768 diverse residential buildings

This paper is organized in the following fashion InSection 2 the procedure steps of CFCM clustering methodsare described The entire design concept and process ofIRBFN are proposed in Section 3 The experimental resultsare performed and discussed in Section 4 Concluding com-ments are covered in Section 5

2 Context-Based Fuzzy C-Means Clustering

TheCFCMclustering introduced by Pedrycz [8] estimates thecluster centers preserving homogeneity with the use of fuzzygranulation First the contexts are produced from the outputvariable used in the modeling problem Next the cluster cen-ters are estimated by FCM clustering from input data pointsincluded in each context By forming fuzzy clusters in inputand output spaces CFCM clustering converts numerical datainto semantically meaningful information granules

In what follows we briefly describe the essence of CFCMclustering [8] In a batch-mode operation this clusteringdetermines the cluster centers and themembershipmatrix bythe following steps

Step 1 Select the number of contexts 119901 and cluster center 119888 ineach context respectively

Step 2 Produce the contexts in output space These contextswere generated through a series of triangular membershipfunctions equally spaced along the domain of an output vari-able However wemay encounter a data scarcity problem dueto small data included in some linguistic context Thus thisproblembrings about the difficulty to obtain clusters from theCFCMclusteringTherefore we use probabilistic distributionof output variable to produce the flexible linguistic contexts[13]

Step 3 Once the contexts have been formed the clustering isdirected by the provided fuzzy set of each context

Step 4 Initialize the membership matrix with random valuesbetween 0 and 1

Step 5 Compute 119888 fuzzy cluster centers using (1)Here fuzzification factor is generally used as fixed value119898 = 2

k119894=sum119873

119896=1119906119898

119894119896x119896

sum119873

119896=1119906119898119894119896

(1)

Local RBFN

Linear regression

Local RBFN

Figure 1 Combination of LR and local RBFN in the design ofincremental RBFN (o data points)

Step 6 Compute the objective function according to

119869 =

119888

sum

119894=1

119873

sum

119896=1

119906119898

1198941198961198892

119894119896 (2)

where 119889119894119896

is the Euclidean distance between 1198941015840th clustercenter and 1198961015840th data point The minimization of objectivefunction is obtained by iteratively updating the values of themembership matrix and cluster centers Stop if it is below acertain tolerance value

Step 7 Compute a new membership matrix using (3) Go toStep 5

119906119905119894119896=

119908119905119896

sum119888

119895=1(1003817100381710038171003817x119896 minus k1198941003817100381710038171003817 10038171003817100381710038171003817x119896minus k119895

10038171003817100381710038171003817)2(119898minus1) (3)

where 119906119905119894119896

represents the element of the membership matrixinduced by the 119894th cluster and 119896th data in the 119905th context119908

119905119896

denotes a membership value of the 119896th data point includedby the 119905th context

3 Incremental Radial Basis FunctionNetworks (IRBFN)

In this Section we focus on two essential phases of theproposed IRBFN (Incremental RBFN) as underlying princi-ple First we design a standard LR which could be treatedas a preliminary construct capturing the linear part of thedata Next the local RBFN is designed to eliminate errorsproduced by the regression part of the model Figure 1 showsthe example of nonlinear relationships and their modelingthrough a combination of LRmodel of a global character anda collection of local RBFN As shown in Figure 1 the LinearRegression exhibits a good match except for two local areasThese remaining regions are predicted by local RBFN withthe use of information granules through CFCM clusteringalgorithm Figure 2 shows the architecture and overall flowof processing realized in the design of the proposed IRBFN

The principle of the IRBFN is explained in the followingsteps

Step 1 Design of Linear Regression (LR) model in the input-output space 119911 = L(x b) with b denoting a vector of the

Computational Intelligence and Neuroscience 3

Linear regressionz

Y

x1

x2

Ew

1

w2

w3

w4

c1

c2

c3

c4

R1

R2

R3

R4

osum

sum

RBFN based onCFCM clustering

Figure 2 Main design process of incremental RBFN

regression hyperplane of the linear model b = [a 1198860]119879 thus

we obtain the predicted output by using LR as a global model[6] on the basis of the original data set a collection of input-error pairs is formed (x

119896 119890119896)

Step 2. Construction of the collection of contexts \( (E_{1}, E_{2}, \ldots, E_{p}) \) in the error space of the regression model; here \( p \) is the number of contexts. The distribution of these fuzzy sets is obtained through the statistical method [7, 8] mentioned in Section 2. The contexts are characterized by triangular membership functions with a 0.5 overlap between neighboring fuzzy sets.
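The triangular membership family with a 0.5 overlap can be sketched as below. Note that this sketch only shows the equally spaced construction; the paper itself derives the context placement statistically from the error distribution, which this simplified illustration does not reproduce.

```python
import numpy as np

def triangular_contexts(e, p):
    # p (>= 2) equally spaced triangular fuzzy sets over the error range;
    # neighboring triangles cross at membership 0.5 and form a partition of unity.
    peaks = np.linspace(e.min(), e.max(), p)    # modal values of the contexts
    step = peaks[1] - peaks[0]
    mfs = []
    for t in range(p):
        a, c = peaks[t] - step, peaks[t] + step  # left/right feet of the triangle
        mf = np.clip(np.minimum((e - a) / step, (c - e) / step), 0.0, 1.0)
        mfs.append(mf)
    return np.stack(mfs)                         # shape (p, N): the w_tk values
```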

Step 3. CFCM clustering completed in the input-output space from the contexts produced in the error space. The obtained cluster centers are used as the centers of the receptive fields in the design of the local RBFN, as shown in Figure 1. For \( p \) contexts and \( c \) clusters for each context, the number of nodes in the hidden layer is \( c \times p \).

Step 4. Calculation of the output in the design of the local RBFN. The final output of the RBFN is the weighted sum of the output values associated with each receptive field, as follows:

\( o = \sum_{i=1}^{c \times p} w_{i} c_{i} \)  (4)

The receptive field functions are fixed, and then the weights of the output layer are directly estimated by LSE (Least Squares Estimation) or BP (Back-Propagation). These methods are known as the most representative techniques frequently used in conjunction with RBFNs [2]. In order to learn and adapt the architecture of the RBFN to cope with changing environments, we need BP learning, in which the steepest descent method tunes the centers of the radial basis functions and the output weights. Otherwise, we can directly obtain the output weights in a one-pass estimation using LSE.
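With the receptive fields fixed, the one-pass LSE estimate of the output weights amounts to a single least-squares solve over the hidden-layer activations. A hedged sketch follows; the Gaussian activation with a single shared width `sigma` is an assumption for illustration, not the paper's stated receptive field form.

```python
import numpy as np

def rbfn_lse_weights(X, e, centers, sigma=1.0):
    # Activation of each receptive field (Gaussian, shared width sigma assumed).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))        # (N, c*p) hidden-layer outputs
    w, *_ = np.linalg.lstsq(Phi, e, rcond=None)   # one-pass LSE of output weights
    return w, Phi

# Predicted error of the local RBFN: e_hat = Phi @ w
# (a weighted sum over the c*p receptive fields).
```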

Table 1: Optimal values of the fuzzification coefficient for selected numbers of contexts (p) and clusters (c).

p \ c    3      4      5      6
3        2.6    2.1    2.2    2.3
4        2.2    2.1    2.2    2.0
5        1.9    1.9    1.9    2.0
6        1.7    1.8    1.8    2.0

Step 5. Calculation of the final output of the proposed IRBFN. The granular result of the IRBFN is combined with the output of the linear part:

\( Y = z + E \)  (5)

In order to evaluate the overall performance, we use the standard root mean square error (RMSE), defined as follows:

\( \mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{k=1}^{N} (Y_{k} - y_{k})^{2}} \)  (6)
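Equations (5) and (6) translate directly into code; a small sketch (assumed helper names, not the authors' implementation):

```python
import numpy as np

def irbfn_predict(z, e_hat):
    # Final IRBFN output (5): global linear part plus granular error estimate.
    return z + e_hat

def rmse(Y, y):
    # Root mean square error of (6).
    return np.sqrt(np.mean((Y - y) ** 2))
```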

4. Experimental Results

In the experiments, we report on the design and performance of the proposed models used to assess the heating load and cooling load requirements of buildings as a function of building parameters. All experiments were completed in 10-fold cross-validation mode with a typical 60-40 split between the training and testing data subsets. We perform the energy analysis using 12 different building shapes simulated in Ecotect [17, 18].

These 12 building forms were generated from the elementary cube (3.5 × 3.5 × 3.5), where each building form is composed of 18 elements. The materials used for each elementary cube are the same for all building forms. The selection was made from the newest and most common materials in the building construction industry and those with the lowest U-value [17]. The buildings differ with respect to the glazing area, the glazing area distribution, orientation, overall height, roof area, wall area, surface area, and relative compactness. The data set comprises 768 samples and 8 features. The attributes to be predicted in terms of the preceding 8 input attributes are two real-valued responses (heating load and cooling load).

We obtained the experimental results with the two essential parameters (p, c) controlling the granularity of the construct in the input and output spaces. The numerical range of the fuzzification factor (m) used in the experiments is between 1.5 and 3.0 with an incremental step of 0.1. Table 1 lists the optimal values of the fuzzification factor as the numbers of contexts and clusters increase. Figure 3 shows the variation of the RMSE caused by the fuzzification factor in the case of p = c = 6 for heating load prediction. Here the optimal values of the parameters are those for which the testing error becomes minimal.
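The selection of the fuzzification factor behind Table 1 and Figure 3 is a simple grid search over the stated range; a hedged sketch, where `evaluate` is a hypothetical callable standing in for one full train/test run at a fixed (p, c):

```python
import numpy as np

def select_fuzzification_factor(evaluate, m_grid=None):
    # evaluate(m) -> testing RMSE for one (p, c) configuration (assumed callable).
    if m_grid is None:
        m_grid = np.arange(1.5, 3.01, 0.1)   # range 1.5..3.0, step 0.1
    scores = [evaluate(m) for m in m_grid]
    best = int(np.argmin(scores))            # the optimal m minimizes testing error
    return float(m_grid[best]), float(scores[best])
```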

In the conventional method [14], the contexts were produced through triangular membership functions equally spaced along the domain of an output variable. However, we may encounter a data scarcity problem due to the small amount of data included in some contexts. Thus, we use a probabilistic distribution of the output variable to obtain flexible contexts. For this purpose, the contexts in the error space are produced based on the histogram shown in Figure 4, the Probability Density Function (PDF), and the Cumulative Distribution Function (CDF) [13]. Figure 5 shows the contexts (p = 6) generated in the error space of the LR model, as one example from the 10-fold cross-validation runs. Figure 6 shows the prediction performance of the local RBFN for the error of the LR. As shown in Figure 6, the local RBFN with the use of information granulation has good prediction capability. Figures 7 and 8 show the performance of the IRBFN based on LSE for the prediction of heating load and cooling load, respectively. Here the number of epochs is 1000 and the learning rate is 0.01. As shown in these figures, the proposed IRBFN exhibits good generalization capability on the testing data set. Tables 2 and 3 list the comparison results of RMSE for the prediction of heating and cooling load, respectively. As listed in these tables, the experimental results revealed that the proposed IRBFN showed good performance in comparison to LR, MLP (Multilayer Perceptron), the conventional RBFN, RBFN with CFCM clustering, and LM (Linguistic Model).

Figure 3: RMSE variation by the values of the fuzzification coefficient (p = c = 6), with RMSE (training) and RMSE (testing) plotted against the fuzzification factor (m).

Figure 4: Histogram in the error space.

Figure 5: Contexts generated in the error space (degree of membership versus error of LR).

Figure 6: Prediction by local RBFN for the error of LR (error of LR versus predicted error of local RBFN over the testing data).

Figure 7: Prediction performance for heating load (desired output versus IRBFN output over the testing data).

Figure 8: Prediction performance for cooling load (desired output versus IRBFN output over the testing data).

Table 2: Comparison results of RMSE for the prediction of heating load (mean value of 10-fold cross-validation; * denotes number of rules).

Methods                  Number of nodes   RMSE (training)   RMSE (testing)
LR                       —                 2.936             2.911
MLP                      36                2.883             2.890
RBFN                     36                3.707             5.199
RBFN (CFCM) [14]         36                2.767             3.106
LM                       36*               4.084             4.388
Proposed incremental method:
  IRBFN (LSE)            36                2.284             2.826
  IRBFN (BP)             36                2.353             2.730

Table 3: Comparison results of RMSE for the prediction of cooling load (mean value of 10-fold cross-validation; * denotes number of rules).

Methods                  Number of nodes   RMSE (training)   RMSE (testing)
LR                       —                 3.180             3.208
MLP                      36                3.176             3.226
RBFN                     36                3.601             4.812
RBFN (CFCM) [14]         36                2.866             3.388
LM                       36*               3.866             4.296
Proposed incremental method:
  IRBFN (LSE)            36                2.462             3.102
  IRBFN (BP)             36                2.555             3.089

5. Conclusions

We developed the incremental RBFN by combining LR and a local RBFN for the prediction of the heating load and cooling load of residential buildings. It was found from the results that the proposed IRBFN has good approximation and generalization capabilities with the aid of information granulation. These results lead us to the conclusion that the proposed IRBFN, combining LR and a local RBFN, showed good performance in comparison to the previous works. For further research, we shall extend this model to optimize the number of contexts and the number of clusters per context based on an evolutionary algorithm.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by research funds from Chosun University, 2012.

References

[1] D. Simon, Evolutionary Optimization Algorithms, John Wiley & Sons, 2013.

[2] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, New York, NY, USA, 1997.

[3] S. Sumathi and S. Panneerselvam, Computational Intelligence Paradigms: Theory and Applications Using MATLAB, CRC Press, New York, NY, USA, 2010.

[4] T. P. Trappenberg, Fundamentals of Computational Neuroscience, Oxford University Press, Oxford, UK, 2nd edition, 2010.

[5] G. A. F. Seber, Linear Regression Analysis, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1977.

[6] W. Pedrycz and K.-C. Kwak, "The development of incremental models," IEEE Transactions on Fuzzy Systems, vol. 15, no. 3, pp. 507-518, 2007.

[7] W. Pedrycz and F. Gomide, Fuzzy Systems Engineering: Toward Human-Centric Computing, Wiley-Interscience, 2007.

[8] W. Pedrycz, "Conditional fuzzy C-means," Pattern Recognition Letters, vol. 17, no. 6, pp. 625-631, 1996.

[9] L. Hu and K. C. Chan, "Fuzzy clustering in a complex network based on content relevance and link structures," IEEE Transactions on Fuzzy Systems, vol. 24, no. 2, pp. 456-470, 2016.

[10] P. Fazendeiro and J. V. De Oliveira, "Observer-biased fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 85-97, 2015.

[11] T. C. Glenn, A. Zare, and P. D. Gader, "Bayesian fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 5, pp. 1545-1561, 2015.

[12] A. Proietti, L. Liparulo, and M. Panella, "2D hierarchical fuzzy clustering using kernel-based membership functions," Electronics Letters, vol. 52, no. 3, pp. 193-195, 2016.

[13] S.-S. Kim and K.-C. Kwak, "Development of quantum-based adaptive neuro-fuzzy networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 91-100, 2010.

[14] W. Pedrycz and A. V. Vasilakos, "Linguistic models and linguistic modeling," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 745-757, 1999.

[15] K.-C. Kwak, "A design of genetically optimized linguistic models," IEICE Transactions on Information and Systems, vol. E95-D, no. 12, pp. 3117-3120, 2012.

[16] W. Pedrycz, "Conditional fuzzy clustering in the design of radial basis function neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 601-612, 1998.

[17] L. Perez-Lombard, J. Ortiz, and C. Pout, "A review on buildings energy consumption information," Energy and Buildings, vol. 40, no. 3, pp. 394-398, 2008.

[18] A. Tsanas and A. Xifara, "Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools," Energy and Buildings, vol. 49, pp. 560-567, 2012.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience 3

Linear regressionz

Y

x1

x2

Ew

1

w2

w3

w4

c1

c2

c3

c4

R1

R2

R3

R4

osum

sum

RBFN based onCFCM clustering

Figure 2 Main design process of incremental RBFN

regression hyperplane of the linear model b = [a 1198860]119879 thus

we obtain the predicted output by using LR as a global model[6] on the basis of the original data set a collection of input-error pairs is formed (x

119896 119890119896)

Step 2 Construction of the collection of contexts in the errorof the regressionmodel (119864

1 1198642 119864

119901) here 119901 is the number

of contexts the distribution of these fuzzy sets is obtainedthrough statistical method [7 8] mentioned in Section 2the contexts are characterized by triangular membershipfunctions with a 05 overlap between neighboring fuzzy sets

Step 3 CFCMclustering completed in the input-output spacefrom the contexts produced in the error space the obtainedcluster centers are used as the centers of receptive fields in thedesign of local RBFN as shown in Figure 1 for 119901 contexts and119888 clusters for each context the number of nodes in hiddenlayer is 119888 times 119901

Step 4 Calculation of output in the design of local RBFN thefinal output of RBFN is the weighted sum of the output valueassociated with each receptive field as follows

119900 =

119888times119901

sum

119894=1

119908119894119888119894 (4)

The receptive field functions are fixed and then the weightsof the output layer are directly estimated by LSE (Least SquareEstimate) and BP (Back-Propagation) These methods areknown as the most representative techniques frequently usedin conjunctionwith RBFN [2] In order to learn and adapt thearchitecture of RBFN to cope with changing environmentswe need BP learning if we use the steepest descent methodto tune the centers of radial basis function and the outputweights in the design of RBFN Otherwise we can directlyobtain the output weights as one-pass estimation usingLSE

Table 1 Optimal values of the fuzzification coefficient for selectednumber of contexts (p) and clusters (c)

p c3 4 5 6

3 26 21 22 234 22 21 22 25 19 19 19 26 17 18 18 2

Step 5 Calculation of final output of the proposed IRBFNthe granular result of the IRBFN is combined with the outputof the linear part

119884 = 119911 + 119864 (5)

In order to evaluate the overall performance we use standardroot mean square error (RMSE) defined as follows

RMSE = radic 1119873

119873

sum

119896=1

(119884119896minus 119910119896)2 (6)

4 Experimental Results

In the experiments we report on the design and performanceof the proposedmodels to assess the heating load and coolingload requirements of building as a function of buildingparameters All experiments were completed in the 10-foldcross-validation mode with a typical 60ndash40 split betweenthe training and testing data subsets We perform energyanalysis using 12 different building shapes simulated inEcotect [17 18]

These 12 building forms were generated by taking theelementary cube (35 times 35 times 35) where each building form iscomposed of 18 elementsThematerials used for each elemen-tary cube are the same for all building forms The selectionwas made by the newest and most common materials in thebuilding construction industry and by the lowest 119880-value[17] The buildings differ with respect to the glazing areathe glazing area distribution orientation overall height roofarea wall area surface area and relative compactness Thedata set comprises 768 samples and 8 features The attributesto be predicted in terms of the preceding 8 input attributes aretwo real valued responses (heating load and cooling load)

We obtained the experimental results with the twoessential parameters (119901 119888) controlling the granularity of theconstruct in the input and output spaceThe numerical rangeof the fuzzification factor (119898) used in the experiments isbetween 15 and 30 with the incremental step of 01 Table 1listed the optimal values of the fuzzification factor by theincrease of the number of contexts and clusters Figure 3shows the variation of the RMSE caused by the fuzzificationfactor in the case of 119901 = 119888 = 6 for heating load predictionHere the optimal values of the parameters are such that thetesting error becomes minimal

In the conventional method [14] the contexts wereproduced through triangular membership functions equally

4 Computational Intelligence and Neuroscience

1 15 2 25 3 35 421

22

23

24

25

26

27

28

29

3

31

Fuzzification factor (m)

RMSE

RMSE (training)RMSE (testing)

Figure 3 RMSE variation by the values of the fuzzification coeffi-cient (119901 = 119888 = 6)

minus10 minus8 minus6 minus4 minus2 0 2 4 6 8 100

20

40

60

80

100

120

140

160

Error

Num

ber o

f cou

nts

Figure 4 Histogram in the error space

spaced along the domain of an output variable However wemay encounter a data scarcity problem due to small amountsof data included in some contextThus we use a probabilisticdistribution of the output variable to obtain flexible contextsFor this the contexts in the error space are producedbased on a histogram shown in Figure 4 Probability DensityFunction (PDF) and Conditional Density Function (CDF)[13] Figure 5 shows the contexts (119901 = 6) generated inthe error space of LR model as one example among 10-fold cross-validation mode Figure 6 shows the predictionperformance based on local RBFN for the error of LR Asshown in Figure 6 the result clearly shows that the localRBFN with the use of information granulation has goodprediction capability Figures 7 and 8 show the performance

minus6 minus4 minus2 0 2 4 6 80

02

04

06

08

1

Error of LR

Deg

ree o

f mem

bers

hip

Figure 5 Contexts generated in the error space

0 50 100 150 200 250 300minus20

minus15

minus10

minus5

0

5

10

15

20

Number of testing data

Erro

r

Error of LRPredicted error of local RBFN

Figure 6 Prediction by local RBFN for the error of LR

0 50 100 150 200 250 3005

10

15

20

25

30

35

40

45

Number of testing data

Hea

ting

load

Desired outputIRBFN output

Figure 7 Prediction performance for heating load

Computational Intelligence and Neuroscience 5

0 50 100 150 200 250 30010

15

20

25

30

35

40

45

50

Number of testing data

Coo

ling

load

Desired outputIRBFN output

Figure 8 Prediction performance for cooling load

Table 2 Comparison results of RMSE for the prediction of heatingload (mean value of 10-fold cross-validation) (lowast number of rules)

Methods Number ofnodes

RMSE(training)

RMSE(testing)

LR mdash 2936 2911MLP 36 2883 2890RBFN 36 3707 5199RBFN (CFCM) [14] 36 2767 3106LM 36lowast 4084 4388Proposed incrementalmethod

IRBFN LSE 36 2284 2826IRBFN BP 36 2353 2730

Table 3 Comparison results of RMSE for the prediction of coolingload (mean value of 10-fold cross-validation) (lowast number of rules)

Methods Number ofnodes

RMSE(training)

RMSE(testing)

LR mdash 3180 3208MLP 36 3176 3226RBFN 36 3601 4812RBFN (CFCM) [14] 36 2866 3388LM 36lowast 3866 4296Proposed incrementalmethod

IRBFN LSE 36 2462 3102IRBFN BP 36 2555 3089

of IRBFN based on LSE for the prediction of heating loadand cooling load respectively Here the number of epochs is1000 and learning rate is 001 respectively As shown in thesefigures the proposed IRBFN showed good generalizationcapability for testing data set respectively Tables 2 and 3listed the comparison results of RMSE for the prediction

of heating and cooling load respectively As listed in thesetables the experimental results revealed that the proposedIRBFN showed good performance in comparison to LRMLP(Multilayer Perceptron) the conventional RBFN RBFN withCFCM clustering and LM (Linguistic Model)

5 Conclusions

We developed the incremental RBFN by combining LRand local RBFN for the prediction of heating load andcooling load of residential buildings It was found from theresult that the proposed IRBFN has good approximationand generalization capabilities with the aid of informationgranulation These results lead us to the conclusion that theproposed IRBFN combined by LR and local RBFN showeda good performance in comparison to the previous worksFor further research we shall design this model to optimizethe number of contexts and clusters per context based onevolutionary algorithm

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work was supported by research funds from ChosunUniversity 2012

References

[1] D Simon Evolutionary Optimization Algorithms JohnWiley ampSons 2013

[2] J S R Jang C T Sun and E Mizutani Neuro-Fuzzy andSoft Computing A Computational Approach to Learning andMachine Intelligence Prentice Hall New York NY USA 1997

[3] S Sumathi and S Panneerselvam Computational IntelligenceParadigmsTheory andApplicationsUsingMATLAB CRCPressNew York NY USA 2010

[4] T P Trappenberg Fundamentals of Computational Neuro-science OxfordUniversity Press Oxford UK 2nd edition 2010

[5] G A F Seber Linear Regression Analysis Wiley Series in Prob-ability and Mathematical Statistics John Wiley amp Sons NewYork NY USA 1977

[6] W Pedrycz and K-C Kwak ldquoThe development of incrementalmodelsrdquo IEEE Transactions on Fuzzy Systems vol 15 no 3 pp507ndash518 2007

[7] W Pedrycz and F Gomide Fuzzy Systems Engineering TowardHuman-Centric Computing Wiley-Interscience 2007

[8] W Pedrycz ldquoConditional fuzzy C-meansrdquo Pattern RecognitionLetters vol 17 no 6 pp 625ndash631 1996

[9] L Hu and K C Chan ldquoFuzzy clustering in a complex networkbased on content relevance and link structuresrdquo IEEE Transac-tions on Fuzzy Systems vol 24 no 2 pp 456ndash470 2016

[10] P Fazendeiro and J V De Oliveira ldquoObserver-biased fuzzyclusteringrdquo IEEE Transactions on Fuzzy Systems vol 23 no 1pp 85ndash97 2015

[11] T C Glenn A Zare and P D Gader ldquoBayesian fuzzy clus-teringrdquo IEEE Transactions on Fuzzy Systems vol 23 no 5 pp1545ndash1561 2015

6 Computational Intelligence and Neuroscience

[12] A Proietti L Liparulo and M Panella ldquo2D hierarchical fuzzyclustering using kernel-basedmembership functionsrdquo Electron-ics Letters vol 52 no 3 pp 193ndash195 2016

[13] S-S Kim and K-C Kwak ldquoDevelopment of quantum-basedadaptive neuro-fuzzy networksrdquo IEEE Transactions on SystemsMan and Cybernetics Part B Cybernetics vol 40 no 1 pp 91ndash100 2010

[14] W Pedrycz and A V Vasilakos ldquoLinguistic models and lin-guistic modelingrdquo IEEE Transactions on Systems Man andCybernetics Part B Cybernetics vol 29 no 6 pp 745ndash757 1999

[15] K-C Kwak ldquoA design of genetically optimized linguistic mod-elsrdquo IEICE Transactions on Information and Systems vol E95-Dno 12 pp 3117ndash3120 2012

[16] W Pedrycz ldquoConditional fuzzy clustering in the design of radialbasis function neural networksrdquo IEEE Transactions on NeuralNetworks vol 9 no 4 pp 601ndash612 1998

[17] L Perez-Lombard J Ortiz and C Pout ldquoA review on buildingsenergy consumption informationrdquo Energy and Buildings vol40 no 3 pp 394ndash398 2008

[18] A Tsanas and A Xifara ldquoAccurate quantitative estimation ofenergy performance of residential buildings using statisticalmachine learning toolsrdquo Energy and Buildings vol 49 pp 560ndash567 2012

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

4 Computational Intelligence and Neuroscience

1 15 2 25 3 35 421

22

23

24

25

26

27

28

29

3

31

Fuzzification factor (m)

RMSE

RMSE (training)RMSE (testing)

Figure 3 RMSE variation by the values of the fuzzification coeffi-cient (119901 = 119888 = 6)

minus10 minus8 minus6 minus4 minus2 0 2 4 6 8 100

20

40

60

80

100

120

140

160

Error

Num

ber o

f cou

nts

Figure 4 Histogram in the error space

spaced along the domain of an output variable However wemay encounter a data scarcity problem due to small amountsof data included in some contextThus we use a probabilisticdistribution of the output variable to obtain flexible contextsFor this the contexts in the error space are producedbased on a histogram shown in Figure 4 Probability DensityFunction (PDF) and Conditional Density Function (CDF)[13] Figure 5 shows the contexts (119901 = 6) generated inthe error space of LR model as one example among 10-fold cross-validation mode Figure 6 shows the predictionperformance based on local RBFN for the error of LR Asshown in Figure 6 the result clearly shows that the localRBFN with the use of information granulation has goodprediction capability Figures 7 and 8 show the performance

minus6 minus4 minus2 0 2 4 6 80

02

04

06

08

1

Error of LR

Deg

ree o

f mem

bers

hip

Figure 5 Contexts generated in the error space

0 50 100 150 200 250 300minus20

minus15

minus10

minus5

0

5

10

15

20

Number of testing data

Erro

r

Error of LRPredicted error of local RBFN

Figure 6 Prediction by local RBFN for the error of LR

0 50 100 150 200 250 3005

10

15

20

25

30

35

40

45

Number of testing data

Hea

ting

load

Desired outputIRBFN output

Figure 7 Prediction performance for heating load

Computational Intelligence and Neuroscience 5

0 50 100 150 200 250 30010

15

20

25

30

35

40

45

50

Number of testing data

Coo

ling

load

Desired outputIRBFN output

Figure 8 Prediction performance for cooling load

Table 2 Comparison results of RMSE for the prediction of heatingload (mean value of 10-fold cross-validation) (lowast number of rules)

Methods Number ofnodes

RMSE(training)

RMSE(testing)

LR mdash 2936 2911MLP 36 2883 2890RBFN 36 3707 5199RBFN (CFCM) [14] 36 2767 3106LM 36lowast 4084 4388Proposed incrementalmethod

IRBFN LSE 36 2284 2826IRBFN BP 36 2353 2730

Table 3 Comparison results of RMSE for the prediction of coolingload (mean value of 10-fold cross-validation) (lowast number of rules)

Methods Number ofnodes

RMSE(training)

RMSE(testing)

LR mdash 3180 3208MLP 36 3176 3226RBFN 36 3601 4812RBFN (CFCM) [14] 36 2866 3388LM 36lowast 3866 4296Proposed incrementalmethod

IRBFN LSE 36 2462 3102IRBFN BP 36 2555 3089

of IRBFN based on LSE for the prediction of heating loadand cooling load respectively Here the number of epochs is1000 and learning rate is 001 respectively As shown in thesefigures the proposed IRBFN showed good generalizationcapability for testing data set respectively Tables 2 and 3listed the comparison results of RMSE for the prediction

of heating and cooling load respectively As listed in thesetables the experimental results revealed that the proposedIRBFN showed good performance in comparison to LRMLP(Multilayer Perceptron) the conventional RBFN RBFN withCFCM clustering and LM (Linguistic Model)

5 Conclusions

We developed the incremental RBFN by combining LRand local RBFN for the prediction of heating load andcooling load of residential buildings It was found from theresult that the proposed IRBFN has good approximationand generalization capabilities with the aid of informationgranulation These results lead us to the conclusion that theproposed IRBFN combined by LR and local RBFN showeda good performance in comparison to the previous worksFor further research we shall design this model to optimizethe number of contexts and clusters per context based onevolutionary algorithm

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work was supported by research funds from ChosunUniversity 2012

References

Computational Intelligence and Neuroscience 5

Figure 8: Prediction performance for cooling load (desired output vs. IRBFN output; x-axis: number of testing data, 0–300; y-axis: cooling load, 10–50).

Table 2: Comparison of RMSE for the prediction of heating load (mean value of 10-fold cross-validation; * denotes number of rules).

Methods                        Number of nodes   RMSE (training)   RMSE (testing)
LR                             —                 2.936             2.911
MLP                            36                2.883             2.890
RBFN                           36                3.707             5.199
RBFN (CFCM) [14]               36                2.767             3.106
LM                             36*               4.084             4.388
Proposed incremental method:
  IRBFN (LSE)                  36                2.284             2.826
  IRBFN (BP)                   36                2.353             2.730

Table 3: Comparison of RMSE for the prediction of cooling load (mean value of 10-fold cross-validation; * denotes number of rules).

Methods                        Number of nodes   RMSE (training)   RMSE (testing)
LR                             —                 3.180             3.208
MLP                            36                3.176             3.226
RBFN                           36                3.601             4.812
RBFN (CFCM) [14]               36                2.866             3.388
LM                             36*               3.866             4.296
Proposed incremental method:
  IRBFN (LSE)                  36                2.462             3.102
  IRBFN (BP)                   36                2.555             3.089

Figures 7 and 8 show the prediction performance of the IRBFN based on LSE for the prediction of heating load and cooling load, respectively. Here the number of epochs is 1000 and the learning rate is 0.01. As shown in these figures, the proposed IRBFN exhibits good generalization capability on the testing data set. Tables 2 and 3 list the comparison of RMSE for the prediction of heating load and cooling load, respectively. As listed in these tables, the experimental results reveal that the proposed IRBFN performs well in comparison to LR, the MLP (Multilayer Perceptron), the conventional RBFN, the RBFN with CFCM clustering, and the LM (Linguistic Model).
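The RMSE values in Tables 2 and 3 are means over 10-fold cross-validation. As a minimal sketch of this evaluation protocol (not the authors' code; the least-squares linear model here is only a stand-in for the compared methods), the procedure can be written as:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error between desired and predicted outputs
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def cv_rmse(X, y, fit, predict, n_folds=10, seed=0):
    # Mean training/testing RMSE over n_folds cross-validation splits
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    tr_scores, te_scores = [], []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit(X[train], y[train])
        tr_scores.append(rmse(y[train], predict(model, X[train])))
        te_scores.append(rmse(y[test], predict(model, X[test])))
    return float(np.mean(tr_scores)), float(np.mean(te_scores))

# Stand-in model: ordinary least squares with a bias column
fit_lr = lambda X, y: np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
predict_lr = lambda w, X: np.c_[X, np.ones(len(X))] @ w
```

Each method in the tables would be plugged in through its own `fit`/`predict` pair; the reported numbers are the two means returned by `cv_rmse`.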

5 Conclusions

We developed the incremental RBFN (IRBFN) by combining LR and a local RBFN for the prediction of heating load and cooling load of residential buildings. The results showed that the proposed IRBFN has good approximation and generalization capabilities with the aid of information granulation. These results lead us to the conclusion that the proposed IRBFN, combining LR and a local RBFN, performs well in comparison to the previous works. For further research, we shall extend this model to optimize the number of contexts and the number of clusters per context with an evolutionary algorithm.
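The incremental design summarized above, a global linear model refined by a local RBF network fitted to the linear model's residuals, can be sketched as follows. This is only an illustrative skeleton under simplifying assumptions, not the authors' implementation: the centers here are sampled directly from the data as a stand-in for Context-based Fuzzy C-Means (CFCM) clustering, a single shared Gaussian width is used, and the RBF weights are obtained by least squares (the LSE variant).

```python
import numpy as np

class IncrementalRBFN:
    """Global linear regression plus a local RBF network on the LR residuals."""

    def __init__(self, n_centers=36, width=1.0, seed=0):
        self.n_centers = n_centers   # number of receptive fields (nodes)
        self.width = width           # shared Gaussian width (assumption)
        self.seed = seed

    def _design(self, X):
        # Gaussian basis outputs for every sample/center pair
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        Xb = np.c_[X, np.ones(len(X))]
        # Step 1: global linear part, fitted by least squares
        self.w_lin = np.linalg.lstsq(Xb, y, rcond=None)[0]
        resid = y - Xb @ self.w_lin
        # Step 2: local RBFN fitted to the residuals; centers sampled from
        # the data as a stand-in for CFCM clustering guided by the error
        rng = np.random.default_rng(self.seed)
        self.centers = X[rng.choice(len(X), self.n_centers, replace=False)]
        Phi = self._design(X)
        self.w_rbf = np.linalg.lstsq(Phi, resid, rcond=None)[0]
        return self

    def predict(self, X):
        # Output = global linear prediction + local nonlinear refinement
        Xb = np.c_[X, np.ones(len(X))]
        return Xb @ self.w_lin + self._design(X) @ self.w_rbf
```

The key design choice is additive: the RBF network never re-models the whole mapping, only what the linear part leaves behind, so the refinement is local by construction.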

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by research funds from Chosun University, 2012.

References

[1] D. Simon, Evolutionary Optimization Algorithms, John Wiley & Sons, 2013.

[2] J. S. R. Jang, C. T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, New York, NY, USA, 1997.

[3] S. Sumathi and S. Panneerselvam, Computational Intelligence Paradigms: Theory and Applications Using MATLAB, CRC Press, New York, NY, USA, 2010.

[4] T. P. Trappenberg, Fundamentals of Computational Neuroscience, Oxford University Press, Oxford, UK, 2nd edition, 2010.

[5] G. A. F. Seber, Linear Regression Analysis, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1977.

[6] W. Pedrycz and K.-C. Kwak, "The development of incremental models," IEEE Transactions on Fuzzy Systems, vol. 15, no. 3, pp. 507–518, 2007.

[7] W. Pedrycz and F. Gomide, Fuzzy Systems Engineering: Toward Human-Centric Computing, Wiley-Interscience, 2007.

[8] W. Pedrycz, "Conditional fuzzy C-means," Pattern Recognition Letters, vol. 17, no. 6, pp. 625–631, 1996.

[9] L. Hu and K. C. Chan, "Fuzzy clustering in a complex network based on content relevance and link structures," IEEE Transactions on Fuzzy Systems, vol. 24, no. 2, pp. 456–470, 2016.

[10] P. Fazendeiro and J. V. De Oliveira, "Observer-biased fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 85–97, 2015.

[11] T. C. Glenn, A. Zare, and P. D. Gader, "Bayesian fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 23, no. 5, pp. 1545–1561, 2015.


[12] A. Proietti, L. Liparulo, and M. Panella, "2D hierarchical fuzzy clustering using kernel-based membership functions," Electronics Letters, vol. 52, no. 3, pp. 193–195, 2016.

[13] S.-S. Kim and K.-C. Kwak, "Development of quantum-based adaptive neuro-fuzzy networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 1, pp. 91–100, 2010.

[14] W. Pedrycz and A. V. Vasilakos, "Linguistic models and linguistic modeling," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 29, no. 6, pp. 745–757, 1999.

[15] K.-C. Kwak, "A design of genetically optimized linguistic models," IEICE Transactions on Information and Systems, vol. E95-D, no. 12, pp. 3117–3120, 2012.

[16] W. Pedrycz, "Conditional fuzzy clustering in the design of radial basis function neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 601–612, 1998.

[17] L. Perez-Lombard, J. Ortiz, and C. Pout, "A review on buildings energy consumption information," Energy and Buildings, vol. 40, no. 3, pp. 394–398, 2008.

[18] A. Tsanas and A. Xifara, "Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools," Energy and Buildings, vol. 49, pp. 560–567, 2012.
