Prediction of coal grindability based on petrography, proximate and ultimate analysis using multiple regression and artificial neural network models


S. Chehreh Chelgani a, James C. Hower b, E. Jorjani a,*, Sh. Mesroghli a, A.H. Bagherieh a

a Department of Mining Engineering, Research and Science Campus, Islamic Azad University, Poonak, Hesarak, Tehran, Iran
b Center for Applied Energy Research, University of Kentucky, 2540 Research Park Drive, Lexington, KY 40511, USA

Fuel Processing Technology 89 (2008) 13-20
www.elsevier.com/locate/fuproc

ARTICLE INFO

Article history:
Received 27 April 2007
Received in revised form 8 June 2007
Accepted 22 June 2007

Keywords:
Hardgrove grindability index
Coal petrography
Coal rank
Ultimate and proximate analysis
Artificial neural network

ABSTRACT

The effects of proximate and ultimate analysis, maceral content, and coal rank (Rmax) on the Hardgrove Grindability Index (HGI) have been investigated for a wide range of Kentucky coal samples, with calorific values from 4320 to 14960 BTU/lb (10.05 to 34.80 MJ/kg), by multivariable regression and artificial neural network (ANN) methods. The stepwise least square mathematical method shows that the relationships between HGI and the input sets (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax, in linear condition, can achieve correlation coefficients (R2) of 0.77, 0.75, and 0.81, respectively. The ANN, which adequately recognized the characteristics of the coal samples, can predict HGI with correlation coefficients of 0.89, 0.89, and 0.95, respectively, in the testing process. It was determined that ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax can be used as the best predictors for the estimation of HGI by both multivariable regression (R2 = 0.81) and artificial neural network methods (R2 = 0.95). The ANN-based prediction method, as used in this paper, can be further employed as a reliable and accurate method for Hardgrove grindability index prediction.

© 2007 Elsevier B.V. All rights reserved.

* Corresponding author. Tel.: +98 912 1776737; fax: +98 21 44817194. E-mail address: [email protected] (E. Jorjani).

0378-3820/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.fuproc.2007.06.004

1. Introduction

Grindability of coal is an important technological parameter for assessing the relative hardness of coals of varying ranks and grades during comminution [1]. It is usually determined by the Hardgrove Grindability Index (HGI), which is of great interest since it is used as a predictive tool to determine the performance capacity of industrial pulverizers in power station boilers [2]. HGI reflects the coal hardness, tenacity, and fracture, and is directly related to the coal rank, megascopic coal lithology, microscopic maceral associations, and the type and distribution of minerals [3]. Grinding properties are important in mining applications since lower-HGI (harder to grind) lithotypes will require a greater energy input [4-6].

Although the HGI testing device is not costly, the measuring procedure to obtain an HGI value is time consuming. Therefore, some researchers have investigated the prediction of HGI based on proximate analysis, petrography, and vitrinite maximum reflectance using regression [7-10].

An artificial neural network (ANN) is an empirical modeling tool which is analogous to the behavior of biological neural structures [11]. Neural networks are powerful tools that have the ability to identify underlying, highly complex relationships from input-output data alone [12]. Over the last 10 years, artificial neural networks (ANNs), and, in particular, feed-forward artificial neural networks (FANNs), have been extensively studied to present process models, and their use in industry has been rapidly growing [13].

Li et al. [14] discussed neural network analyses using 67 coals covering a wide range of coal quality for the prediction of HGI on the basis of proximate analysis. A problem with their analysis was the use of a rank range which spanned the reversal of HGI values in the medium volatile bituminous rank range. Bagherieh et al. [15] used vitrinite, inertinite, liptinite, Rmax, fusinite, ash, and sulfur analyses on 195 sets of data and improved the ANN prediction to R2 = 0.92 for testing data.

The aim of the present work is the assessment of the properties of more than 600 Kentucky coals with reference to HGI, and of possible variations with respect to vitrinite maximum reflectance, proximate and ultimate analysis, and petrography of the coal, using multivariable regression (SPSS software package), with improvement of the results by an artificial neural network (MATLAB software package). This work is an attempt to answer the following important questions: (a) Is there a suitable multivariable relationship between vitrinite maximum reflectance, petrography, proximate and ultimate analysis, and HGI for a wide range of Kentucky coals? (b) Can we improve the correlation of predicted HGI with actual measured HGI by using an artificial neural network?

2. Experimental data

A mathematical model requires a comprehensive database to cover a wide variety of coal types. Such a model will be capable of predicting HGI with a high degree of accuracy. Data used to test the proposed approaches are from studies conducted at the University of Kentucky Center for Applied Energy Research. A total of more than 600 sets of data were used.

3. Results and discussion

3.1. Multivariable correlation of HGI with macerals, Rmax, proximate and ultimate analysis

3.1.1. Proximate analysis

Sengupta [1] examined the relation between the proximate analysis parameters (moisture, ash, volatile matter, and fixed carbon) and HGI and found a nonlinear second-order regression equation with a correlation coefficient (r) of 0.93 [1]. A problem with that analysis was the use of all four parameters: fixed carbon, moisture, volatile matter, and ash. It is not necessary to use all four parameters since, by definition, the four parameters are a closed system, adding to 100% [16].

In the current study, it was found that the use of moisture, ash, volatile matter, and total sulfur achieves the best results in predicting HGI. The ranges of the input variables for HGI prediction for the 633 Kentucky samples are shown in Table 1. By a least square mathematical method, the correlation coefficients of moisture (M), ash (A), volatile matter (V), and total sulfur with HGI were determined to be +0.184, +0.107, -0.160, and +0.619, respectively. The results show that higher moisture and total sulfur contents in coal can result in higher HGI, and higher volatile matter content in coal results in lower HGI.

The following equation resulted between HGI and proximate analysis:

HGI = 102.69 + 4.227 Stotal - 1.634 V + 0.569 A + 0.237 M,   R2 = 0.77   (1)

The distribution of the difference between HGI predicted from Eq. (1) and the actual determined HGI is shown in Fig. 1.

Table 1 - The ranges of variables in the coal samples (as determined)

Variable | Min | Max | Mean | Standard deviation
Moisture | 0.80 | 13.2 | 3.95 | 2.66
Ash | 0.64 | 59.8 | 10.3 | 8.55
Volatile matter | 16.6 | 46.4 | 34.8 | 3.92
Total sulfur | 0.30 | 7.60 | 1.84 | 1.56
Carbon | 28.9 | 83.5 | 70.8 | 8.44
Hydrogen | 2.48 | 6.05 | 5.15 | 0.52
Nitrogen | 0.09 | 2.34 | 1.49 | 0.24
Oxygen | 0.00 | 20.6 | 10.4 | 3.14
Resinite | 0.00 | 7.8 | 0.64 | 0.64
Exinite | 0.6 | 52.9 | 6.27 | 4.36
Macrinite | 0.0 | 12.5 | 0.25 | 0.81
Micrinite | 0.0 | 34.9 | 2.86 | 2.63
Semifusinite | 0.0 | 47.1 | 5.52 | 5.70
Rmax | 0.4 | 1.1 | 0.87 | 0.16

Fig. 1 - Distribution of the difference between actual HGI and estimated HGI (Eq. (1)).
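To make the procedure above concrete, the sketch below reproduces the two steps of Section 3.1.1 in Python: pairwise correlation of each proximate variable with HGI, followed by a least-squares fit of the same form as Eq. (1). It is only an illustrative sketch, not the authors' SPSS workflow; the file name and column names are assumptions standing in for the Kentucky data set.

```python
# Illustrative sketch of Section 3.1.1 (not the authors' SPSS procedure):
# pairwise correlations with HGI, then a least-squares fit of the Eq. (1) form.
# The CSV file and column names are hypothetical stand-ins for the Kentucky data.
import numpy as np
import pandas as pd

df = pd.read_csv("kentucky_coals.csv")
inputs = ["moisture", "ash", "volatile_matter", "total_sulfur"]

# Pairwise Pearson correlation of each predictor with HGI
# (cf. the reported +0.184, +0.107, -0.160 and +0.619).
for col in inputs:
    print(f"r({col}, HGI) = {df[col].corr(df['HGI']):+.3f}")

# Ordinary least squares: HGI ~ b0 + b1*moisture + b2*ash + b3*volatile_matter + b4*total_sulfur
X = np.column_stack([np.ones(len(df))] + [df[c].to_numpy() for c in inputs])
y = df["HGI"].to_numpy()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(dict(zip(["intercept"] + inputs, np.round(coef, 3))))
print(f"R^2 = {r2:.2f}")   # Eq. (1) reports R2 = 0.77
```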

3.1.2. Ultimate analysis and moisture

Vuthaluru et al. [17] studied the effects of moisture and coal blending on HGI for Collie coal of Western Australia, finding a significant effect of moisture content on HGI [17].

In the current study, the best correlation was found between ln Stotal, ln ((oxygen + nitrogen)/carbon) ((O + N)/C), hydrogen (H), ash (A), and moisture (M) with HGI. By a least square mathematical method, the correlation coefficients of ln Stotal, ln ((O + N)/C), and H with HGI were determined to be +0.662, +0.198, and -0.263, respectively. The results show that higher hydrogen content in coal can result in lower HGI, and higher ln Stotal results in higher HGI.

The following equation resulted between HGI and ultimate analysis:

HGI = 77.162 + 3.994 ln Stotal - 10.920 H + 1.904 M + 0.424 A - 11.765 ln ((O + N)/C),   R2 = 0.75   (2)

The distribution of the difference between HGI predicted from Eq. (2) and the actual determined HGI is shown in Fig. 2.

Fig. 2 - Distribution of the difference between actual HGI and estimated HGI (Eq. (2)).
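The ultimate-analysis input set of Eq. (2) differs from the proximate set mainly in its transformed predictors. Continuing the hypothetical DataFrame from the previous sketch, the fragment below builds ln (Stotal) and ln ((O + N)/C) before the fit; the regression step itself is identical to the one shown above.

```python
# Constructing the transformed predictors of Eq. (2); column names are assumed.
import numpy as np

df["ln_S_total"] = np.log(df["total_sulfur"])
df["ln_ON_over_C"] = np.log((df["oxygen"] + df["nitrogen"]) / df["carbon"])

inputs = ["ln_S_total", "hydrogen", "ash", "ln_ON_over_C", "moisture"]
# ...then refit HGI on these five columns with the same least-squares code as before.
```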

3.1.3. Petrography and Rmax

The Hardgrove grindability index is primarily a function of the maceral composition, more precisely the mix of macerals. A greater amount of liptinite macerals such as sporinite, cutinite, resinite, and alginite (from spores, leaf cuticles, resins, and algae, respectively), particularly in combination with finely dispersed inertinite macerals, can result in a lower grindability index [18].

HGI is not simply a function of the maceral content, though. Through the rank range present through most of the Central Appalachians, HGI will increase with an increase in rank. The influence of mineral matter on HGI is also complex [18].

The relationship between HGI and coal petrography was studied by Hsieh [19], Chandra and Maitra [20], Hower et al. [7], Hower and Wild [8], Hower [9], and Trimble and Hower [10]. Trimble and Hower evaluated the influence of macerals and microlithotypes on HGI and on pulverizer performance in different reflectance ranges [10].

Hower and Wild examined 656 Kentucky coal samples to determine the relationship between proximate and ultimate analysis, petrography, and vitrinite maximum reflectance with HGI for both eastern and western Kentucky. For eastern Kentucky, the subject of the investigations in this paper, they found that HGI could be predicted by the following equation [8]:

HGI = 37.41 - 10.22 ln (liptinite) + 28.18 Rmax + Stotal,   R2 = 0.64   (3)

In the present work, macerals and Rmax were used as inputs to the SPSS software, and it was found that ln (exinite), semifusinite (SF), micrinite (MI), macrinite (MA), resinite (R), and Rmax are the variables that are the best constituents of multivariable regression. The ranges of the petrography components for the Kentucky samples are shown in Table 1.

By a least square mathematical method, the correlation coefficients of ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax with HGI are -0.814, -0.360, +0.588, -0.090, -0.448, and -0.116, respectively. The results show that an increase of the ln (exinite), semifusinite, and resinite contents in coal can decrease HGI. An increase in micrinite results in higher HGI.

An equation between the mentioned parameters and HGI can be written as follows:

HGI = 48.175 - 7.679 ln (Ex) + 13.269 Rmax - 0.137 SF + 0.584 MI - 1.237 MA - 1.171 R,   R2 = 0.81   (4)

The distribution of the difference between HGI predicted from Eq. (4) and the actual determined HGI is shown in Fig. 3.

Fig. 3 - Distribution of the difference between actual HGI and estimated HGI (Eq. (4)).
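Figs. 1-3 all show the same diagnostic: the distribution of the difference between the regression-estimated and the laboratory-determined HGI. A minimal sketch of that check is given below, assuming the `y` and `pred` arrays produced in the regression sketch of Section 3.1.1.

```python
# Distribution of (actual - predicted) HGI, the quantity plotted in Figs. 1-3.
# `y` and `pred` are assumed to come from one of the regression sketches above.
import numpy as np

diff = y - pred
print(f"mean difference = {diff.mean():+.2f}, std = {diff.std():.2f}")

# Coarse text histogram of the differences in 5-unit bins.
edges = np.arange(-20, 25, 5)
counts, _ = np.histogram(diff, bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:+3d}, {hi:+3d})  n = {int(n):3d}  {'#' * (int(n) // 10)}")
```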

3.2. Artificial neural network

Neural networks can be seen as a legitimate part of statistics that fits snugly in the niche between parametric and non-parametric methods [21]. They are non-parametric, since they generally do not require the specification of explicit process models, but they are not quite as unstructured as some statistical methods in that they adhere to a general class of models. In this context, neural networks have been used to extend, rather than replace, regression models, principal component analysis [22,23], principal curves [24], and partial least squares methods [25], as well as the visualization of process data in several major ways, to name but a few.

In addition, the argument that neural networks are really highly parallelized neurocomputers or hardware devices and should therefore be distinguished from statistical or other pattern recognition algorithms is not entirely convincing. In the vast majority of cases, neural networks are simulated on single-processor machines. There is no reason why other methods cannot also be simulated or executed in a similar way (and indeed are) [21].

Artificial neural networks (ANNs) are simplified systems simulating the intelligent behavior exhibited by animals via mimicking the types of physical connections occurring in their brains [26]. Derived from their biological counterparts, ANNs are based on the concept that a highly interconnected system of simple processing elements (also called nodes or neurons) can learn complex nonlinear interrelationships existing between the input and output variables of a data set [27].

Table 2 - Details of the ANN-based HGI models

Model no. | Basis | Model inputs | Training set size | Test set size | I | J | K
I | As determined | Moisture, total sulfur, volatile matter, ash | 400 | 232 | 4 | 12 | -
II | As determined | Carbon, hydrogen, oxygen + nitrogen, ln (Stotal), moisture | 400 | 200 | 5 | 12 | -
III | As determined | Resinite, micrinite, macrinite, ln (exinite), semifusinite, Rmax | 400 | 201 | 6 | 5 | 3

I = No. of input nodes; J = No. of nodes in the first hidden layer; K = No. of nodes in the second hidden layer.

The main advantage of an ANN is the ability to model a problem by the use of examples (i.e. data driven), rather than by describing it analytically. ANNs are also very powerful in effectively representing complex nonlinear systems, and an ANN can be considered a nonlinear statistical identification technique [11].

For developing a nonlinear ANN model of a system, a feed-forward architecture, namely the MLP, is most commonly used. This network usually consists of a hierarchical structure of three layers described as the input, hidden, and output layers, comprising I, J, and K processing nodes, respectively. At times, two hidden layers (Fig. 4) are used between the input and output layers of the network. Each node in the input layer is linked to all the nodes in the hidden layer using weighted {wij} connections. Similar connections exist between the hidden and output layers, as well as between hidden layer-I and hidden layer-II nodes [26]. Feed-forward networks consist of N layers using the dot product weight function, the netsum net input function, and the specified transfer functions [28].

The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer. All layers have biases. The last layer is the network output [28].

Fig. 4 - FANN architecture with two hidden layers.
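The forward pass just described, weighted connections feeding a summed net input plus a bias at every node, followed by a transfer function, can be written out directly. The sketch below is a generic two-hidden-layer feed-forward pass in NumPy; the layer sizes, the tanh transfer function, and the random weights are illustrative assumptions, not the MATLAB toolbox implementation used by the authors.

```python
# Generic feed-forward (MLP) pass with two hidden layers: each node forms a
# dot-product net input plus a bias, then applies a transfer function.
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 6, 8, 4                    # input, first-hidden, second-hidden node counts (illustrative)

W1, b1 = rng.normal(size=(J, I)), np.zeros(J)   # input layer -> first hidden layer
W2, b2 = rng.normal(size=(K, J)), np.zeros(K)   # first hidden -> second hidden layer
W3, b3 = rng.normal(size=(1, K)), np.zeros(1)   # second hidden -> output node

def forward(x):
    """Propagate one normalized input vector of length I through the network."""
    h1 = np.tanh(W1 @ x + b1)        # first hidden layer (J nodes)
    h2 = np.tanh(W2 @ h1 + b2)       # second hidden layer (K nodes)
    return (W3 @ h2 + b3)[0]         # linear output (e.g. a predicted HGI)

print(forward(rng.normal(size=I)))   # dummy input through untrained weights
```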

Table 3 - Statistical analysis of the HGI generalization performance of the ANN-based models

Model | Train set correlation coefficient | Test set correlation coefficient
I | 0.82 | 0.89
II | 0.81 | 0.89
III | 0.86 | 0.95

Back-propagation can train multilayer feed-forward networks with differentiable transfer functions to perform function approximation, pattern association, and pattern classification. The term back-propagation refers to the process by which derivatives of the network error, with respect to the network weights and biases, can be computed. This process can be used with a number of different optimization strategies [28].

However, the numbers of nodes (J, K) in the hidden layers are adjustable parameters, whose magnitudes are governed by issues such as the desired prediction accuracy and the generalization performance of the ANN model. In order that the MLP network accurately approximates the nonlinear relationship existing between its inputs and outputs, it is trained such that a pre-specified error function is minimized. This training procedure essentially aims at obtaining an optimal set of network connection weights that minimizes the pre-specified error function [29].

In this study, two ANN models (models I and II) have been developed by considering one hidden layer, and the third one (Model III) by considering two hidden layers in the MLP architecture, with training using the EBP algorithm (Table 2). According to Eqs. (1), (2), and (4), the selected variables were determined as the best variables for the prediction of HGI. Therefore, these variables were used as inputs to the ANN for the improvement of HGI prediction.

Neural network training can be made more efficient by certain pre-processing steps. In the present work, all inputs (before feeding to the network) and the output data (in models I and III) in the training phase were scaled so that they varied in the range of 0 to 1, using the mean and standard deviation:

pn = (Ap - mean(Ap)) / std(Ap)   (5)

where Ap is the actual parameter, mean(Ap) is the mean of the actual parameters, std(Ap) is the standard deviation of the actual parameter, and pn is the normalized parameter (input) [28].
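A sketch of the Eq. (5) scaling applied column-wise to an input matrix follows. Reusing the training-set mean and standard deviation on the test set is the usual convention and is an assumption here; the paper does not spell that detail out.

```python
# Eq. (5): pn = (Ap - mean(Ap)) / std(Ap), applied to each input column.
# X_train and X_test are assumed NumPy arrays (rows = samples, columns = inputs).
import numpy as np

mean = X_train.mean(axis=0)
std = X_train.std(axis=0)

Xn_train = (X_train - mean) / std        # normalized training inputs
Xn_test = (X_test - mean) / std          # same statistics reused on the test set
```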

While the training set was used in the EBP algorithm-based iterative minimization of the error, the test set was used after each training iteration to assess the generalization ability of the MLP model.

The prediction and generalization performances of ANN models I, II, and III were compared with the results of Eqs. (1), (2), and (4), respectively. The results are shown in Table 3. The training process was stopped after 3000 epochs for models I and II and 5000 epochs for Model III. The performance function used is the mean square error (MSE), the average squared error between the network-predicted outputs and the target outputs, which was 0.18, 6.47, and 0.14 for the training data of models I to III, respectively. Figs. 5-7 and 11(a, b, c) show the predicted data using the FANN versus the actual data in the testing process. The distributions of the difference between HGI calculated from the described ANN procedures and the actual determined HGI are shown in Figs. 8-10. The above-described results suggest that ANNs, owing to their excellent nonlinear modeling ability, are a better alternative to the linear models for the prediction of the HGI of coals.

Fig. 5 - Predicted HGI by the neural network versus actual measured HGI in the testing process (Model I).

Fig. 6 - Predicted HGI by the neural network versus actual measured HGI in the testing process (Model II).

Fig. 7 - Predicted HGI by the neural network versus actual measured HGI in the testing process (Model III).
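As a rough stand-in for the MATLAB workflow described above (EBP training, an MSE performance function, and a fixed number of epochs), the sketch below trains a feed-forward regressor with scikit-learn and reports the train- and test-set correlation coefficients compared in Table 3. The 400/201 split mirrors Model III; the solver, learning-rate settings, and exact hidden-layer sizes are assumptions rather than details taken from the paper.

```python
# Sketch of a Model-III style experiment with scikit-learn's MLPRegressor as a
# stand-in for the MATLAB EBP training; hyperparameters are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Xn_train, y_train, Xn_test, y_test are assumed to exist, scaled as in Eq. (5);
# e.g. 400 training and 201 test samples with the six petrographic inputs.
model = MLPRegressor(hidden_layer_sizes=(5, 3),   # two hidden layers (J, K)
                     activation="tanh",
                     solver="sgd",                # gradient-descent back-propagation
                     max_iter=5000,               # cf. the 5000 epochs for Model III
                     random_state=0)
model.fit(Xn_train, y_train)

for name, X, y in [("train", Xn_train, y_train), ("test", Xn_test, y_test)]:
    pred = model.predict(X)
    print(f"{name}: MSE = {np.mean((y - pred) ** 2):.2f}, "
          f"R = {np.corrcoef(y, pred)[0, 1]:.2f}")
```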

4. Technical considerations

According to Eq. (1), which presents the relation of HGI with moisture, volatile matter, ash, and total sulfur for 632 coal samples, the correlation coefficient of the regression-estimated HGI with the actual determined HGI is R2 = 0.77. Fig. 5 shows a better correlation coefficient, R2 = 0.89, than regression for estimating HGI with the test data sets using the FANN, for which 400 data sets were used for training and 232 data sets were used for the test. In related work, Li et al. [14] applied neural network analyses, specifically a generalized regression neural network (GRNN), using only 67 coal samples, with 61 data sets for training and six data sets for the test, in the prediction of HGI on the basis of the proximate analysis. As noted above, their study was flawed because of the use of coals on both sides of the medium volatile bituminous reversal of HGI. Also, Sengupta [1] examined the relation between proximate analyses and HGI and found a correlation coefficient (r) of 0.93 [1]. The problem of that work was the use of all four parameters (fixed carbon, moisture, volatile matter, and ash), which are a closed system adding to 100% [16]. As a result, in the present work the interrelationship between coal properties was considered, achieving a higher correlation (R2 = 0.89) and avoiding the problems mentioned in the previous works.

Fig. 8 - Graphical comparison of experimental HGIs with those estimated by ANN model-I (panel a), ANN model-II (panel b), and ANN model-III (panel c).

Fig. 9 - Distribution of the difference between actual HGI and that estimated by the neural network (Model I).

In Eq. (2), the relation of HGI with ln Stotal, hydrogen, moisture, ash, and ln ((oxygen + nitrogen)/carbon) was presented. To our knowledge, this is the first time that these parameters have been used to predict HGI using multivariable regression and ANNs (400 and 200 data sets were used for training and testing, respectively). The high correlation coefficient of R2 = 0.89 for HGI prediction by the ANN is evidence that the proposed neural network model can accurately estimate HGI with the ultimate analysis and moisture as the predictors.

Fig. 10 - Distribution of the difference between actual HGI and that estimated by the neural network (Model II).

In Eq. (4), which presents the relation of HGI with ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax for 601 coal samples, the correlation coefficient between the regression-estimated HGI and the actual determined HGI is R2 = 0.81.

As related work, Hower and Wild [8] studied the relationship between sulfur, petrography, and vitrinite maximum reflectance with HGI for eastern Kentucky coals and found a correlation of 0.64, for which liptinite, reflectance, and sulfur emerged as significant predictors. In this work, a better correlation coefficient (0.81) was achieved with a linear equation in which ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax were the predictors.

Fig. 7 shows a better correlation coefficient, R2 = 0.95, than regression for estimating HGI with the test data sets using the FANN, for which 400 data sets were used for training and 201 data sets were used for the test. Bagherieh et al. [15] applied generalized regression neural network analyses using 195 coal samples, with 148 data sets for training and 33 data sets for the test, in the prediction of HGI on the basis of petrography. The correlation coefficient (R2) of the predicted HGI with the actual determined HGI was 0.92 for the testing data. In the current work, a wider range of coal samples (201 data sets) was used for testing, and the results were improved by the FANN to R2 = 0.95, which is the highest correlation coefficient reported until now.

According to the above results, it can be concluded that the proposed multiple regression formulas (Eqs. (1), (2), and (4)) and the ANN procedures yield significant predictions of HGI. As a comparison between the inputs to the models, the coal macerals and Rmax are better predictors than the others in both the regression and ANN procedures (Table 3).

Fig. 11 - Distribution of the difference between actual HGI and that estimated by the neural network (Model III).

    5. Conclusions

Three data sets, (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax, were found to be the best constituents of multivariable regression for the prediction of HGI.

Higher moisture content in coal can result in higher HGI, and higher volatile matter content in coal results in lower HGI. No other set (a) parameters were significant.

An increase of hydrogen content in coal can result in lower HGI, and higher ln (Stotal) results in higher HGI.

Higher ln (exinite), semifusinite, and resinite contents in coal decrease HGI. An increase in micrinite results in higher HGI. No other macerals were significant.

For the proposed multivariable equations: Eq. (1), with the moisture, ash, volatile matter, and total sulfur input set, achieved R2 = 0.77; Eq. (2), with the ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture input set, resulted in R2 = 0.75; and Eq. (4), with the ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax input set, resulted in the best regression correlation reported until now (R2 = 0.81).

The FANN procedures used to improve the correlation coefficients between the predicted and the actual determined HGIs, with resulting R2 values of 0.89, 0.89, and 0.95 for the input sets (a), (b), and (c), respectively, had not been previously reported.

ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax are the best predictors for the estimation of HGI by both the multivariable regression and artificial neural network methods.

REFERENCES

[1] A.N. Sengupta, An assessment of grindability index of coal, Fuel Processing Technology 76 (1) (2002) 1-10.
[2] X. Sun, Combustion Experiment Technology and Method for Coal Fired Furnace, China Electricity and Power Press, Beijing, 2001.
[3] S. Ural, M. Akyildiz, Studies of relationship between mineral matter and grinding properties for low-rank coal, International Journal of Coal Geology 60 (2004) 81-84.
[4] M.-Th. Mackrowsky, C. Abramski, Kohlenpetrographische Untersuchungsmethoden und ihre praktische Anwendung, Feuerungstechnik 31 (3) (1943) 49-64.
[5] J.T. Peters, N. Schapiro, R.J. Gray, Know your coal, Transactions of the American Institute of Mining and Metallurgical Engineers 223 (1962) 1-6.
[6] J.C. Hower, G.T. Lineberry, The interface of coal lithology and coal cutting: study of breakage characteristics of selected Kentucky coals, Journal of Coal Quality 7 (1988) 88-95.
[7] J.C. Hower, A.M. Graese, J.G. Klapheke, Influence of microlithotype composition on Hardgrove Grindability Index for selected Kentucky coals, International Journal of Coal Geology 7 (1987) 227-244.
[8] J.C. Hower, G.D. Wild, Relationships between Hardgrove Grindability Index and petrographic composition for high-volatile bituminous coals from Kentucky, Journal of Coal Quality 7 (1988) 122-126.
[9] J.C. Hower, Interrelationship of coal grinding properties and coal petrology, Minerals and Metallurgical Processing 15 (3) (1998) 1-16.
[10] A.S. Trimble, J.C. Hower, Studies of relationship between coal petrology and grinding properties, International Journal of Coal Geology 54 (2002) 253-260.
[11] H.M. Yao, H.B. Vuthaluru, M.O. Tade, D. Djukanovic, Artificial neural network-based prediction of hydrogen content of coal in power station boilers, Fuel 84 (2005) 1535-1542.
[12] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, USA, 1999.
[13] L.H. Ungar, E.J. Hartman, J.D. Keeler, G.D. Martin, Process modelling and control using neural networks, American Institute of Chemical Engineers Symposium Series 92 (1996) 57-66.
[14] P. Li, Y. Xiong, D. Yu, X. Sun, Prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 84 (2005) 2384-2388.
[15] A.H. Bagherieh, J.C. Hower, A.R. Bagherieh, E. Jorjani, Studies of the relationship between petrography and grindability for Kentucky coals using artificial neural network, International Journal of Coal Geology (in press).
[16] J.C. Hower, Letter to the editor, discussion: prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 85 (2006) 1307-1308.
[17] H.B. Vuthaluru, R.J. Brooke, D.K. Zhang, H.M. Yan, Effect of moisture and coal blending on Hardgrove Grindability Index of Western Australian coal, Fuel Processing Technology 81 (2003) 67-76.
[18] J.C. Hower, C.F. Eble, Coal quality and coal utilization, Energy Minerals Division Hourglass 30 (7) (February 1996) 1-8.
[19] S.-S. Hsieh, Effects of bulk-components on the grindability of coals, Ph.D. dissertation, The Pennsylvania State University, University Park, 1976.
[20] U. Chandra, A. Maitra, A study on the effect of vitrinite content on coal pulverization and preparation, Journal of Indian Academy of Geosciences 19 (2) (1976) 9.
[21] C. Aldrich, Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods, Elsevier, 2002, p. 5.
[22] M.A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal 37 (2) (1991) 233-243.
[23] M.A. Kramer, Autoassociative neural networks, Computers and Chemical Engineering 16 (4) (1992) 313-328.
[24] D. Dong, T.J. McAvoy, Nonlinear principal component analysis based on principal curves and neural networks, Computers and Chemical Engineering 20 (1996) 65-78.
[25] S. Qin, T.J. McAvoy, Nonlinear PLS modeling using neural networks, Computers and Chemical Engineering 16 (1992) 379-391.
[26] S.U. Patel, B.J. Kumar, Y.P. Badhe, B.K. Sharma, S. Saha, S. Biswas, A. Chaudhury, S.S. Tambe, B.D. Kulkarni, Estimation of gross calorific value of coals using artificial neural networks, Fuel 86 (2007) 334-344.
[27] S.S. Tambe, B.D. Kulkarni, P.B. Deshpande, Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical and Biological Sciences, Simulation and Advanced Controls, Louisville, KY, 1996.
[28] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB, Handbook, 2002.
[29] D. Rumelhart, G. Hinton, R. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533-536.
