
DESIGN OF META-HEURISTIC COMPUTING PARADIGM FOR MATHEMATICAL MODEL OF

BIOINFORMATICS

By

AYAZ HUSSAIN BUKHARI

DOCTOR OF PHILOSOPHY

IN MATHEMATICS

DEPARTMENT OF MATHEMATICS

FACULTY OF PHYSICAL & NUMERICAL SCIENCES

ABDUL WALI KHAN UNIVERSITY MARDAN

KHYBER PAKHTUNKHWA, PAKISTAN

2020


Author's Declaration

I, Ayaz Hussain Bukhari, hereby state that my PhD thesis titled "Design of Meta-Heuristic Computing Paradigm for Mathematical Model of Bioinformatics" is my own work and has not been submitted previously by me for any degree from this university, Abdul Wali Khan University, Mardan, or anywhere else in the country or the world. If at any time my statement is found to be incorrect, even after my graduation, the university has the right to withdraw my PhD degree.

Signature: _____________

Name: Ayaz Bukhari


Plagiarism Undertaking

I solemnly declare that the research work presented in the thesis titled "Design of Meta-Heuristic Computing Paradigm for Mathematical Model of Bioinformatics" is solely my own research work, with no significant contribution from any other person. Small contributions or help, wherever taken, have been duly acknowledged, and the complete thesis has been written by me.

I understand the zero-tolerance policy of the HEC and Abdul Wali Khan University, Mardan towards plagiarism. Therefore, as the author of the above-titled thesis, I declare that no portion of my thesis has been plagiarized and that any material used as a reference has been properly cited.

I undertake that if I am found guilty of any formal plagiarism in the above-titled thesis, even after the award of my PhD degree, the university reserves the right to withdraw/revoke my PhD degree, and the HEC and the university have the right to publish my name on the HEC/University website where the names of students who submitted plagiarized theses are placed.

Signature: _____________

Name: Ayaz Bukhari


Certificate of Approval

This is to certify that the research work presented in this thesis, entitled “Design of Meta-

Heuristic Computing Paradigm For Mathematical Model of Bioinformatics” was conducted by

Ayaz Hussain Bukhari under the supervision of Dr. Muhammad Sulaiman. No part of this

thesis has been submitted anywhere else for any other degree. This thesis is submitted to the

Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor

of Philosophy in the field of Mathematics, Department of Mathematics, Abdul Wali Khan

University, Mardan.

Approved by:

Supervisory Committee:

________________________

Dr. Muhammad Sulaiman Supervisor

Department of Mathematics

________________________

Dr. Muhammad Asif Zahoor Raja Co-Supervisor

COMSATS University Islamabad, Attock Campus

________________________

Prof. Dr. Saeed Islam

Convener, BOS & GSC

________________________

Prof. Dr. Aurangzeb

Dean, Faculty of Physical and Numerical Sciences

_________________ _______

Dr. Muhammad Shoaib Co-Supervisor-I/Member

Department of Mathematics

COMSATS University Islamabad, Attock Campus


Publications

1. Ayaz Hussain Bukhari, Muhammad Sulaiman, Saeed Islam, Muhammad Shoaib, Poom Kumam, Muhammad Asif Zahoor Raja, "Neuro-fuzzy modeling and prediction of summer precipitation with application to different meteorological stations", Alexandria Engineering Journal, Volume 59, Issue 1, February 2020, Pages 101-116.

2. Ayaz Hussain Bukhari, Muhammad Sulaiman, Saeed Islam, Muhammad Shoaib, Poom Kumam, Muhammad Asif Zahoor Raja, "Design of hybrid NAR-RBFs neural network for dynamical analysis of nonlinear dusty plasma system", IEEE Access, 2020.

3. Ayaz Hussain Bukhari, Muhammad Sulaiman, Saeed Islam, Muhammad Shoaib, Muhammad Asif Zahoor Raja, "Fractional neuro-sequential paradigm for parametrization modeling of Stock Exchange variables with Hybrid ARFIMA-LSTM", IEEE Access, DOI: 10.1109/ACCESS.2020.2985763, April 2020.


Copyright by Ayaz Bukhari

The undersigned hereby certify that they have read, and recommend to the Faculty of Mathematics for acceptance, a thesis entitled "Design of Meta-Heuristic Computing Paradigm for Mathematical Model of Bioinformatics" by Ayaz Bukhari, Reg No: AWKUM-, in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Signature: _____________

Name: Ayaz Bukhari

Date: Jan, 2020


Dedicated

To

My Late Parents

Without their prayers I am nothing


ACKNOWLEDGEMENT

All glory be to Allah, the Creator of the universe. First of all, I am thankful to my late parents, whose prayers have always been a source of great inspiration for me, and to my teachers, who always guided me; and to my wife and my children Haider, Ali and Iman, without whose prayers, care, patience and strong support this work would have been impossible.

The author wishes to express his deep gratitude to his supervisors, Dr. Muhammad Sulaiman, Dr. Muhammad Asif Zahoor Raja and Dr. Muhammad Shoaib, for their precious advice, cooperation and selfless help throughout the entire period of this work. Their subtle ideas and brilliant, advanced research guided me in the right direction and compelled me to extract the best of myself in order to complete my journey on a high note.

I must extend my appreciation to Dr. Saeed Islam, HOD of the Mathematics Department, AWKUM, who has always remained a source of knowledge, encouragement and motivation throughout my time at AWKUM.

I want to extend thanks to Dr. Hassan Khan, Dr. Hakeem Ullah and Dr. Shakoor Muhammad for their inspiring instruction and dedicated teaching during the course work of the degree program at AWKUM.

I also acknowledge the contributions of the Honorable Pro Vice Chancellor Prof. Dr. Khurshid Khan and the Dean of the Faculty of Physical & Numerical Sciences, Prof. Dr. Aurangzeb Khan, Abdul Wali Khan University Mardan, Khyber Pakhtunkhwa, Pakistan, for giving us research facilities and an adequate research environment which enabled us to complete our research in the best way.

It is an honour for me to appreciate the guidance of Dr. Muhammad Asif Zahoor Raja and Dr. Muhammad Shoaib for their selfless, valuable and abundant support, and for sparing their precious time and spending many nights to make the research work successful, and Dr. Ather Kharal for scientific guidance and expert direction.

I also want to pay my heartiest appreciation to Prof. Dr. Saeed Islam, Chairman of the Department of Mathematics, Abdul Wali Khan University Mardan, for providing the right environment and all the requirements that were needed.

I am thankful to Dr. Shaheen Abbas, Dr. Rashid Kamal Ansari and Dr. Abdumalik Rakhimov for their inspiring instruction and dedicated teaching during the MPhil degree program at Federal Urdu University, Karachi.

I am thankful to Dr. Mehdi Hassan, HOD of the Computer Science Department, Air University Islamabad, for his able guidance in programming and compiling the results.

I am also grateful to Dr. Uzair Ahmed, HOD of Mathematics, The University of Lahore, for his encouragement, teaching and guidance.

I am thankful to my colleagues, particularly Iftikhar Uddin, for helping in the research work.

Last but not least, I am also thankful to Mr. Muhammad Yasir, office assistant of the Mathematics Department, AWKUM, who provided all necessary support and information round the clock whenever I required it.


TABLE OF CONTENTS

1.1 MOTIVATION ........................................................................................................................................ 20

1.2 STATEMENT OF THE RESEARCH PROBLEM ....................................................................................... 20

1.3 RESEARCH OBJECTIVES ..................................................................................................................... 21

1.4 CONTRIBUTION ...................................................................................................................................... 22

1.5 THE ORGANIZATION OF DISSERTATION ....................................................................................................... 23

2.1 INTRODUCTION ................................................................................................................................. 24

2.2 ARTIFICIAL NEURAL NETWORKS ................................................................................................................. 25

2.2.1 FUNDAMENTALS OF ARTIFICIAL NEURAL NETWORKS ................................................................................... 26

2.3 FUZZY LOGIC.......................................................................................................................................... 26

2.4 RADIAL BASIS FUNCTIONS ........................................................................................................................ 28

2.4.1 RADIAL BASIS FUNCTIONS NEURAL NETWORK ............................................................................................ 29

2.5 ARIMA MODEL .................................................................................................................................... 31

2.6 DYNAMIC NONLINEAR AUTOREGRESSIVE NEURAL NETWORK (NAR) ............................................................... 31

2.7 FRACTIONAL DERIVATIVES ........................................................................................................................ 32

2.7.1 DEFINITION 1: GRUNWALD-LETNIKOV ...................................................................................................... 33

2.7.2 DEFINITION 2: MICHELE CAPUTO ............................................................................................................. 33

2.7.3 DEFINITION 3: ATANGANA-BALEANU........................................................................................................ 34

2.7.4 DEFINITION 4: RIEMANN-LIOUVILLE ......................................................................................... 34

2.7.5 FRACTIONAL TIME SERIES ........................................................................................................................ 35

2.7.6 ARFIMA MODEL .................................................................................................................................. 35

2.8 LSTM MODEL ...................................................................................................................................... 37

2.9 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK (GRNN) ............................................................. 39

2.10 PROPOSED HYBRID ARFIMA-LSTM MODEL ............................................................................................. 39

3.1 INTRODUCTION ................................................................................................................................. 41


3.1.1 KARACHI .............................................................................................................................................. 45

3.1.2 NAWABSHAH ........................................................................................................................................ 46

3.1.3 HYDERABAD ......................................................................................................................................... 48

3.1.4 CHOR .................................................................................................................................................. 50

3.1.5 BADIN ................................................................................................................................................. 52

ESTIMATED FOR JULY. JOHNSON SB IS FOUND FIT FOR THE PROBABILITY DISTRIBUTION IN AUGUST. .............................. 53

3.2 STRUCTURE OF NARX MODEL .................................................................................................................. 53

3.2.1 LEVENBERG–MARQUARDT (LM) ALGORITHM ............................................................................................ 54

3.2.2 ADVANTAGES OF LM ALGORITHM ............................................................................................................ 55

3.3 EVALUATION CRITERIA ............................................................................................................................ 55

3.4 MODELING WITH NARX ......................................................................................................................... 56

3.4.1 KARACHI ............................................................................................................................................... 57

3.4.2 HYDERABAD ......................................................................................................................................... 61

3.4.4 BADIN ................................................................................................................................................. 70

3.4.5 CHOR .................................................................................................................................................. 75

3.5 CONCLUSIONS ....................................................................................................................................... 81

4.1 INTRODUCTION ................................................................................................................................. 82

4.1 OBJECTIVE OF STUDY .............................................................................................................................. 84

4.1.1 DEFINITION 1: GRUNWALD-LETNIKOV ...................................................................................................... 85

4.1.2 DEFINITION 2: MICHELE CAPUTO ............................................................................................................. 86

4.1.3 DEFINITION 3: ATANGANA-BALEANU........................................................................................................ 86

4.1.4 DEFINITION 4: RIEMANN-LIOUVILLE ......................................................................................................... 86

4.2 FRACTIONAL TIME SERIES ........................................................................................................................ 87

4.3 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK (GRNN) ............................................................. 88

4.4 STATISTICAL DESCRIPTION OF DATA ................................................................................................ 88


4.5 ARIMA AND ARFIMA MODEL ............................................................................................................... 92

4.5.1 ARIMA MODEL ................................................................................................................................... 92

4.5.2 ARFIMA MODEL .................................................................................................................................. 94

4.6 LSTM MODEL ...................................................................................................................................... 97

4.7 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK (GRNN) ........................................................... 100

4.8 PROPOSED HYBRID ARFIMA-LSTM MODEL ........................................................................................... 101

4.8.1 EVALUATION CRITERIA ......................................................................................................................... 104

4.9 EXPERIMENTAL RESULT OF ARFIMA-LSTM ............................................................................................. 105

4.10 CONCLUSION ....................................................................................................................................... 110

5.2 VAN DER POL MATHIEU’S EQUATION ..................................................................................................... 117

5.3 DESIGN METHODOLOGY ....................................................................................................................... 118

5.3.1.DYNAMIC NONLINEAR AUTOREGRESSIVE NEURAL NETWORK (NAR) ........................................................... 119

5.3.2. RADIAL BASIS FUNCTIONS (RBFS) ......................................................................................................... 120

5.4 PERFORMANCE INDICES ........................................................................................................................ 139

5.4.1 STATISTICAL TEST ................................................................................................................................ 139

5.5 STATISTICAL ANALYSIS OF VDP-ME ........................................................................................................ 140

5.5.1 SPECIAL NONLINEAR TRANSFORMATION .................................................................................................. 143

5.6 PROPOSED HYBRID RBF-NAR MODEL .................................................................................................... 144

5.7 EXPERIMENTAL RESULTS........................................................................................................................ 145

5.8.1 ANALYSIS OF MULTIPLE INDEPENDENT TRIALS ............................................................................. 152

6.1 SUMMARY .......................................................................................................................................... 175

6.2 CONCLUSION ....................................................................................................................................... 176

REFERENCES ................................................................................................................................................ 178


LIST OF TABLES

TABLE 3.1 LOCATION OF SYNOPTIC STATIONS IN SINDH, PAKISTAN ......................................................................... 44

TABLE 3.2 PROBABILITY DISTRIBUTION OF RAINFALL IN KARACHI............................................................................. 45

TABLE 3.3 PROBABILITY DISTRIBUTION OF RAINFALL IN NAWABSHAH ...................................................................... 47

TABLE 3.4 PROBABILITY DISTRIBUTION OF RAINFALL IN HYDERABAD ........................................................................ 49

TABLE 3.5 PROBABILITY DISTRIBUTION OF RAINFALL IN CHOR ................................................................................. 51

TABLE 3.6 PROBABILITY DISTRIBUTION OF RAINFALL IN BADIN ................................................................................ 52

TABLE 3.7 RESULT OF NARX FOR THE MONTH OF JUNE ........................................................................................ 57

TABLE 3.8 RESULT OF NARX FOR THE MONTH OF JULY ........................................................................................ 58

TABLE 3.9 RESULT OF NARX FOR THE MONTH OF AUGUST ................................................................................... 60

TABLE 3.10 RESULT OF NARX FOR THE MONTH OF JUNE ...................................................................................... 62

TABLE 3.11 RESULT OF NARX FOR THE MONTH OF JULY ...................................................................................... 63

TABLE 3.12 RESULT OF NARX FOR THE MONTH OF AUGUST ................................................................................. 65

TABLE 3.13 RESULT OF NARX FOR THE MONTH OF JUNE ...................................................................................... 66

TABLE 3.14 RESULT OF NARX FOR THE MONTH OF JULY NAWABSHAH ................................................................... 68

TABLE 3.15 RESULT OF NARX FOR THE MONTH OF AUGUST ................................................................................. 69

TABLE 3.16 RESULT OF NARX FOR THE MONTH OF JUNE ...................................................................................... 71

TABLE 3.17 RESULT OF NARX FOR THE MONTH OF JULY ...................................................................................... 73

TABLE 3.18 RESULT OF NARX FOR THE MONTH OF AUGUST ................................................................................. 74

TABLE 3.19 RESULT OF NARX FOR THE MONTH OF JUNE ...................................................................................... 75

TABLE 3.20 RESULT OF NARX FOR THE MONTH OF JULY ...................................................................................... 77

TABLE 3.21 RESULT OF NARX FOR THE MONTH OF AUGUST ................................................................................. 80

TABLE 4.1 STATISTICAL DESCRIPTION OF FFC OPEN PRICE WITH THE DEPENDENT VARIABLE ......................................... 92

TABLE 4.2 PARAMETER ESTIMATION RESULT ARFIMA(1,D,3) FOR FFC COMPANY ................................................... 96


TABLE 4.3 FORECAST STATISTICS USING ARIMA, ARFIMA AND HYBRID ARFIMA-LSTM 3. THE FFC ...................... 106

TABLE 4.4 THE FFC FORECAST RESULTS USING ARIMA, ARFIMA AND HYBRID ARFIMA-LSTM ............................. 107

TABLE 5.1 STATISTICAL DESCRIPTION OF VDP-ME EQUATION ............................................................................. 140

TABLE 5.2 STATISTICAL DESCRIPTION OF BI-MODEL VDP-ME ............................................................................. 142

TABLE 5.3 PROBABILITY BASED PROPOSED TRANSFORMATION ............................................................................. 144

TABLE 5.4 NAR MODEL PERFORMANCE N=1000 AND D = 1 ............................................................................... 147

TABLE 5.5 ANALYSIS OF VARIANCE (ONE-WAY ANOVA) .................................................................................... 153

TABLE 5.6 GROUPING INFORMATION USING THE TUKEY METHOD ....................................................................... 153

TABLE 5.7 TUKEY SIMULTANEOUS TESTS FOR DIFFERENCES OF MEANS ................................................................. 153

TABLE 5.8 NAR-RBFS MODEL FOR SCENARIO 4 CASE 3 FOR VDP-ME ................................................................ 154

TABLE 5.9 MOMENTS ANALYSIS (ST- DEV & VARIANCE) FOR THE PROPOSED VDP-ME ........................................... 156

TABLE 5.10 MOMENTS ANALYSIS (KURTOSIS & RMSE) FOR THE PROPOSED VDP-ME ........................................... 157

TABLE 5. 11 CONVERGENCE ANALYSIS OF PROPOSED NAR-RBFS MODELS ............................................................ 162


LIST OF FIGURES:

FIG 2. 1 BACK PROPAGATION NEURAL NETWORK ................................................................................................. 29

FIG 2. 2 ARCHITECTURE RBFS NETWORK ........................................................................................................... 30

FIG 2. 3 STRUCTURE OF NAR MODEL SYSTEM ..................................................................................................... 32

FIG 2. 4 STRUCTURE OF RNN NEURAL NETWORK ............................................................................................... 38

FIG 2. 5 LSTM NEURAL NETWORK STRUCTURE .................................................................................................. 39

FIG 3. 1 LOCATION OF THE GAUGING STATIONS OF SINDH, PAKISTAN ...................................................................... 46

FIG 3. 2 ANNUAL RAINFALL AT KARACHI (1971-2012) ........................................................................................ 47

FIG 3. 3 PROBABILITY DENSITY FUNCTION FOR THE MONTH OF (A) JUNE, (B) JULY AND (C) AUGUST FOR KARACHI .......... 48

FIG 3. 4 ANNUAL RAINFALL AT NAWABSHAH (1971-2012) ................................................................................. 49

FIG 3. 5 PROBABILITY DENSITY FUNCTION FOR THE MONTH OF (A) JUNE, (B) JULY AND (C) AUG FOR NAWABSHAH ......... 50

FIG 3. 6 ANNUAL RAINFALL AT HYDERABAD (1971-2012). .................................................................................. 51

FIG 3. 7 PROBABILITY DENSITY FUNCTION FOR THE MONTH OF (A) JUNE, (B) JULY AND (C) AUG FOR HYDERABAD ........... 52

FIG 3. 8 MONTHLY MEAN RAINFALL CHOR (1971-2012) ..................................................................................... 52

FIG 3. 9 PROBABILITY DENSITY FUNCTION FOR THE MONTH OF (A) JUNE, (B) JULY AND (C) AUG FOR CHOR ................... 53

FIG 3. 10 MONTHLY MEAN RAINFALL BADIN (1971-2012) .................................................................................. 53

FIG 3. 11 PROBABILITY DENSITY FUNCTION ESTIMATED FOR JULY. JOHNSON SB IS FOUND FIT FOR THE PROBABILITY DISTRIBUTION IN AUGUST. .............................................................................................................................................. 55

FIG 3. 12 STRUCTURE OF NARX MODEL FOR THE RAIN FORECASTING ..................................................................... 58

FIG 3. 13 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 60

FIG 3. 14 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JUNE AT KARACHI .................................. 61

FIG 3. 15 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 61

FIG 3. 16 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JULY AT KARACHI .................................. 61

FIG 3. 17 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 62


FIG 3. 18 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JUNE AT KARACHI .................................. 63

FIG 3. 19 COMPARISON OF FORECASTING NARX AND ARIMA MODELS ................................................................. 63

FIG 3. 20 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 64

FIG 3. 21 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JUNE AT HYDERABAD ............................. 65

FIG 3. 22 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 65

FIG 3. 23 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JULY AT HYDERABAD .............................. 66

FIG 3. 24 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 67

FIG 3. 25 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF AUGUST AT HYDERABAD ........................ 67

FIG 3. 26 COMPARISON OF FORECASTING NARX AND ARIMA MODELS ................................................................. 69

FIG 3. 27 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 70

FIG 3. 28 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JUNE AT NAWABSHAH ........................... 70

FIG 3. 29 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JULY AT NAWABSHAH ............................ 71

FIG 3. 30 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 72

FIG 3. 31 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 72

FIG 3. 32 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL OF JUNE AT NAWABSHAH ........................... 73

FIG 3. 33 COMPARISON OF FORECASTING NARX AND ARIMA MODELS ................................................................. 74

FIG 3. 34 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 75

FIG 3. 35 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN JUNE AT BADIN ...................................... 75

FIG 3. 36 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 76

FIG 3. 37 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN JULY AT BADIN ...................................... 77

FIG 3. 38 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 78

FIG 3. 39 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN AUGUST AT BADIN ................................. 78

FIG 3. 40 COMPARISON OF FORECASTING NARX AND ARIMA MODELS ................................................................. 79

FIG 3. 41 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 80

FIG 3. 42 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN JUNE AT CHOR....................................... 80


FIG 3. 43 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 82

FIG 3. 44 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN JULY AT CHOR ....................................... 82

FIG 3. 45 REGRESSION ANALYSIS OF NARX FOR TRAINING, VALIDATION AND TEST SETS ............................................. 83

FIG 3. 46 REGRESSION FIT BETWEEN OBSERVED AND PREDICTED RAINFALL IN AUGUST AT CHOR .................................. 83

FIG 3. 47 COMPARISON OF FORECASTING NARX AND ARIMA ............................................................................. 84

FIG 4. 1 FRACTIONAL ORDER REPRESENTATION OF FUNCTION F(X)=X2 .................................................................... 88

FIG 4. 2 GRAPHICAL REPRESENTATION OF FFC DAILY DATA 2009-2018 ................................................................ 90

FIG 4. 3 PROBABILITY DISTRIBUTION OF FFC OPEN PRICE ..................................................................................... 90

FIG 4. 4 PERCENTILE GAUSSIAN FIT OF FFC OPEN PRICE ....................................................................................... 91

FIG 4. 5 SEASONAL PLOT OF FFC COMPANY FROM 2009-2018 ............................................................................ 91

FIG 4. 6 GRAPH OF DEPENDENT VARIABLES USED IN THE MODELING. ...................................................................... 92

FIG 4. 7 ARIMA RESIDUAL PLOT AND ITS ACF AND PACF LAG PLOT OF FFC OPEN PRICE. ......................................... 95

FIG 4. 8 ARFIMA RESIDUAL OF OPEN PRICE FFC COMPANY FROM 2009-2018 ..................................................... 98

FIG 4. 9 STRUCTURE OF RNN NEURAL NETWORK ............................................................................................... 98

FIG 4. 10 OVERALL GRAPHICAL ABSTRACT OF THE PROPOSED TECHNIQUE, ARFIMA-LSTM FOR MODELING OF FFC OPEN

PRICE ................................................................................................................................................... 99

FIG 4.11 HYBRID LSTM NEURAL NETWORK STRUCTURE .................................................................................... 101

FIG 4.12 THE ARCHITECTURE OF GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK ................................... 102

FIG 4.13 HYBRID LSTM MODEL OF FFC DATA OPEN PRICE WITH SEQUENTIAL CORRELATION .................................... 103

FIG 4.14 LSTM MODEL FITTING OF RESIDUAL OF ARFIMA MODEL FFC OPEN PRICE ............................................... 106

FIG 4.15 TRAINING AND TESTING ERROR LSTM MODEL RESIDUAL OF FFC OPEN PRICE. ........................................... 107

FIG 4.16 GRNN ARCHITECTURE FOR PREDICTION OF FFC OPEN PRICE .................................................................. 107

FIG 4.17 GRAPHICAL COMPARISON OF FFC FORECAST RESULTS USING ARIMA, ARFIMA, GRNN AND HYBRID ARFIMA-

LSTM ................................................................................................................................................ 110

FIG 4.18 GRAPHICAL COMPARISON OF MAE ERROR FFC OPEN PRICE FORECAST .................................................... 110


FIG 4.19 PARAMETRIC COMPARISON OF MAE ERROR FFC OPEN PRICE FORECAST .................................................. 111

FIG 5. 1 STRUCTURE OF NAR MODEL SYSTEM ................................................................................................... 122

FIG 5. 2 PROPOSED METHODOLOGY NAR-RBF-NN FOR NONLINEAR DUSTY PLASMA MODELS .................................. 123

FIG 5. 3 STRUCTURE OF NAR NEURAL NETWORK MODEL .................................................................................... 136

FIG 5. 4 BACK PROPAGATION NEURAL NETWORK ............................................................................................... 136

FIG 5.5 ARCHITECTURE RBFS NETWORK .......................................................................................................... 136

FIG 5.6 NORMAL PROBABILITY DISTRIBUTION OF VDP-ME ................................................................................. 141

FIG 5.7 BI-MODEL DISTRIBUTION OF VDP-ME PROBABILITY ............................................................................... 142

FIG 5.8 PROBABILITY DISTRIBUTION OF BI-MODEL VDP-ME ............................................................................... 142

FIG 5.9 CDF & PDF DISTRIBUTION OF BIMODAL VDP-ME ................................................................................. 145

FIG 5.10 STRUCTURE OF PROPOSED NAR MODEL ............................................................................................. 146

FIG 5.11 NAR MODEL RESPONSE DUSTY PLASMA EQUATION ............................................................................... 146

FIG 5.12 TRAINING, TESTING AND VALIDATION OF NAR MODEL DUSTY PLASMA EQUATION ........................................... 147

FIG 5.13 BEST VALIDATION ERROR OF NAR MODEL EQUATION ........................................................................... 148

FIG 5.14 RESIDUAL OF NAR MODEL FIT ........................................................................................................... 148

FIG 5.15 STRUCTURE OF RBFS NEURAL NETWORK MODEL .................................................................................. 149

FIG 5.16 BEST VALIDATION PERFORMANCE OF RBFS MODEL ............................................................................... 149

FIG 5.17 TRAINING, VALIDATION AND TEST ERROR OF RBFS MODEL ..................................................................... 149

FIG 5. 18 MODEL RESIDUAL PLOT OF NAR-RBFS NEURAL NETWORK ................................................................... 150

FIG 5.19 COMPARISON OF RESULT OBTAINED FROM NAR- RBFS MODELS WITH EXACT FOR SCENARIO 1 AND 2 .......... 161

FIG 5. 20 COMPARISON OF RESULT OBTAINED FROM NAR- RBFS MODELS WITH EXACT SOLUTION SCENARIO 3 .......... 162

FIG 5. 21 COMPARISON OF RESULT OBTAINED FROM NAR- RBFS MODELS WITH EXACT SOLUTION SCENARIO 4 CASE 2 164

FIG 5. 22 COMPARISON OF RESULT OBTAINED FROM NAR- RBFS MODELS WITH EXACT SOLUTION SCENARIO 4 CASE 2 164

FIG 5. 23 COMPARISON OF CDF& PDF FOR PROPOSED NAR-RBFS MODEL WITH EXACT SOLUTION FOR SCENARIO 1 . 167

FIG 5. 24 COMPARISON OF CDF& PDF FOR PROPOSED NAR-RBFS MODEL WITH EXACT SOLUTION FOR SCENARIO 3 . 169


FIG 5. 25 COMPARISON OF CDF& PDF FOR PROPOSED NAR-RBFS MODEL WITH EXACT SOLUTION FOR SCENARIO 3 . 170

FIG 5. 26 PARAMETRIC COMPARISON PROPOSED NAR-RBF MODEL WITH EXACT SOLUTION FOR SCENARIOS 1-4 ....... 172

FIG 5. 27 PARAMETRIC COMPARISON PROPOSED NAR-RBF MODEL WITH EXACT SOLUTION FOR SCENARIOS 2 AND 3. 173


Chapter 1

_____________________________________________________

1.1 MOTIVATION

Bioinformatics-based numerical computing solvers belong to an interdisciplinary field that is inherently complex and requires combined effort in development and implementation across biology, mathematics, statistics and computer science [1]. Nonlinearity, limited measurable dynamics and the complex nature of biological systems [2] make it interesting to design optimization solvers based on mathematical modeling. Bioinformatic systems are generally represented with multi-modal architectures [3-6] having many local extrema, because of which local search methodologies frequently fail. Therefore, for mathematical models arising in bioinformatics, one needs to explore global search schemes for reliable investigation of their dynamics. Besides the nonlinearity of the model, other features such as high dimensionality make the computing task more rigorous and complex [7]. To achieve better and faster results, new algorithms and techniques are being explored [8]. As a result, metaheuristics and other bio-inspired techniques are continuously being developed [9] to solve the nonlinear equations that arise from advances in bioinformatics. In short, metaheuristic algorithms act as top-level general strategies [10] which guide lower-level heuristics in less computational time than classical algorithms. Radial basis function networks, with their dynamic architecture, have extraordinary potential [11] to model the multimodal behavior of complex systems together with fractional derivatives and deep learning techniques.

1.2 STATEMENT OF THE RESEARCH PROBLEM

A metaheuristic computing paradigm is introduced for nonlinear chaotic time series and different nonlinear systems, particularly in meteorology, finance and bioinformatics, for which conventional methodologies such as Runge-Kutta and other numerical methods face computing difficulties as well as problems in handling the associated constraints. The differential equation from bioinformatics is transformed into an optimization problem by use of approximation theory in the mean squared error sense. Radial basis and other transcendental functions are used for the formulation of a stochastic numerical technique for the stochastic differential equation. Besides the nonlinearity of the model, other features, including high dimensionality, make the computing task more rigorous and complex and increase the challenge of modeling the behavior of chaotic nonlinear systems.
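As a simple illustration of this transformation (a minimal sketch, not the specific bioinformatics model treated later), the following Python fragment writes a trial solution of the toy initial value problem y' = -y, y(0) = 1 as a weighted sum of Gaussian radial basis functions and forms the mean-squared-error fitness that a metaheuristic solver would minimize; the collocation points, centers and widths are assumed placeholder values.

import numpy as np

t = np.linspace(0.0, 2.0, 21)          # collocation points (assumed)
centers = np.linspace(0.0, 2.0, 5)     # RBF centers (assumed)
width = 0.5                            # common RBF width (assumed)

def trial_solution(w, t_pts):
    # y_hat(t) = sum_j w_j * exp(-(t - c_j)^2 / (2*width^2))
    phi = np.exp(-(t_pts[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))
    return phi @ w

def fitness(w):
    # Mean-squared residual of y' + y = 0 plus the squared initial-condition error.
    y = trial_solution(w, t)
    dy = np.gradient(y, t)               # numerical derivative of the trial solution
    residual = dy + y                    # residual of the ODE y' = -y
    ic_error = (trial_solution(w, np.array([0.0]))[0] - 1.0) ** 2
    return np.mean(residual ** 2) + ic_error

# Any global or metaheuristic optimizer can now minimize this scalar fitness;
# here it is only evaluated at a random candidate weight vector.
print(fitness(np.random.default_rng(5).normal(size=centers.size)))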

1.3 RESEARCH OBJECTIVES

The objectives of the study are:

(a) To design a meta-heuristic computing paradigm for the mathematical modeling of bioinformatic models.

(b) To investigate the strength of fractional derivatives, in hybrid modeling with neural networks, for modeling fast-fluctuating, complex and high-frequency data.

(c) To develop a novel ARFIMA-LSTM hybrid recurrent network with the strength of the fractional order derivative.

(d) To analyze time series data, identify the nature of the phenomenon in the sequence of observations, and study the pattern by using bio-inspired heuristics.

(e) To identify, intensify and capture local features of the noisy, irregular, dynamic behavior and chaotic nature of the nonlinear system governed by the VDP-ME using radial basis NNs.

(f) To design and forecast nonlinear time series models and predict future values on the basis of the patterns identified.


1.4 CONTRIBUTION

The innovative contributions of the designed hybrid neurocomputing approach are presented in terms of the following salient features:

(a) The nonlinear autoregressive network with exogenous inputs approach provides accurate and reliable results for modeling nonlinear systems.

(b) The hybrid ARFIMA model handles linear tendencies in the data better than the ARIMA model.

(c) The proposed fractional hybrid paradigm provides a flexible tool for classes of long-memory models.

(d) The proposed hybrid models minimize the overfitting problem of neural networks besides mitigating the volatility problem.

(e) A new family of deep learning with the strength of a hybrid artificial neural network, in the form of the Nonlinear Autoregressive Radial Basis Functions (NAR-RBFs) neural network model, is presented for the initial value problem of the bi-modal VDP-ME.

(f) A class of new transformations is introduced to ensure convergence and a reduction in search time, improving efficiency, smoothness, functionality and parametric computation.

(g) The competency of the proposed hybrid neural network model is endorsed in terms of accuracy, stability, fast convergence, low sensitivity and dynamic consistency in characteristics for variant chaotic systems.

(h) The method's extensibility allows the building of a generalized framework for modeling higher-order ODE and PDE solutions, with applications beyond nanotechnology, especially in the modeling of stiff scenarios.


1.5 THE ORGANIZATION OF DISSERTATION

The dissertation consists of six chapters. Chapter 1 is devoted to a brief history and an overview of artificial intelligence algorithms. In Chapter 2, we provide a literature review and some useful preliminaries along with existing neural network methodologies. In Chapter 3, a nonlinear autoregressive network with exogenous inputs (NARX) model for a time series is analyzed to evaluate the pattern of precipitation. In Chapter 4, modeling of fast-fluctuating and high-frequency financial data is presented. In Chapter 5, the design of a hybrid meta-heuristic computing paradigm for the mathematical modeling of a VDP-ME based system is presented. In Chapter 6, we provide a brief summary of the findings along with conclusions and suggestions for further related studies.


CHAPTER 2

LITERATURE REVIEW

________________________________________________________________

2.1 INTRODUCTION

This chapter presents a basic explanation of Artificial Neural Networks (ANNs), their significance in scientific development, and feedforward back-propagation. In modern philosophy and science, researchers often consider intelligence one of the outstanding achievements rediscovered in the 20th century. Artificial intelligence and artificial life are a perfect example of the integration of many scientific fields. The main methods in the study of artificial life are: the synthesis of artificial systems with behavior similar to living systems, the study of the dynamics of the process rather than the end result, and the construction of systems exhibiting the phenomenon of creation. Artificial intelligence is the property of automatic systems to take on certain functions of human intelligence; an ANN-based artificial intelligence is a decision-making model that carries out functions of natural human intellect. Artificial intelligence can claim comparison with natural intelligence provided that the quality of its solutions is not worse than that of average natural intelligence. Neurobiology and neuroanatomy have made significant progress; by carefully studying the structure and function of the human nervous system, researchers now understand much about the "wiring" of the brain. The accumulated knowledge has revealed that the brain has stunning complexity: hundreds of billions of neurons, each connected to hundreds or thousands of others, form a system far beyond our wildest dreams of supercomputers. However, the brain gradually gives up its secrets in one of the most intense and ambitious research efforts in the history of mankind. A better understanding of the functioning of the neuron and its connections has allowed researchers to create mathematical models to test their theories, and experiments can now be carried out on a digital computer without involving humans. Neural modeling has two aims: first, to understand the functioning of the human nervous system at the level of physiology and psychology, and second, to create computing systems (artificial neural networks) that perform functions similar to the functions of the brain.

2.2 ARTIFICIAL NEURAL NETWORKS

Artificial neural networks are mathematical models developed using the principles of the biological functioning of the human nervous system. In 1943, W. McCulloch [12] and W. Pitts proposed the first mathematical model of a neuron. McCulloch and Pitts showed that a network composed of such neurons can perform calculations like a programmable digital computer; in a sense, the network contains a "code" that controls the computing process. However, real neurons have a number of specific characteristics that distinguish them from the simplified model presented by McCulloch and Pitts: in real neurons, the amount of neurotransmitter substance flowing into the synaptic cleft can vary in an unpredictable manner. One of the most fruitful of these models was that of D. Hebb [13] in 1949, who proposed a learning law that became the starting point for the learning algorithms of artificial neural networks, later complemented by a variety of other methods. The decade from the late 1950s to the beginning of the 1970s can be called the first golden age of neural networks. During this period, F. Rosenblatt [14] and colleagues studied a special type of ANN called the perceptron, which they saw as a simplified model of a biological mechanism for processing sensory information. In its simplest form, the perceptron is composed of two different layers of neurons, the input and output layers. Note that even a simple perceptron is, from a technical point of view, a three-layer device, as a layer of sensory cells is located in front of the first layer of computational neurons, which are called the input layer. Neurons in the output layer receive signals from the synaptic neurons in the input layer. In the 1960s, Frank Rosenblatt, motivated by this work, investigated the calculations underlying these observations, which led to the first generation of neural networks, the perceptron, and the perceptron convergence theorem. The perceptron has been used for a wide range of problems, such as weather prediction, the analysis of electrocardiograms and artificial vision. The theory of artificial neural networks is developing rapidly, but at the moment it is not mature enough to support the most optimistic projects. The current explosion of interest in neural networks has attracted thousands of researchers, and it is reasonable to expect a rapid growth in our understanding of artificial neural networks, leading to improved network paradigms.

2.2.1 FUNDAMENTALS OF ARTIFICIAL NEURAL NETWORKS

Artificial neural networks are extremely diverse in their configurations and resemble the brain in two respects. A neuron is an information processing unit which is fundamental to the operation of the neural network, with the following components:

(a) A set of synaptic links, each characterized by a weight. Specifically, an input signal $x_j$ connected to neuron $k$ is multiplied by the weight $w_{kj}$; the first index refers to the neuron that owns the weight and the second index refers to the component of the input vector to the neuron.

(b) A summing node $\sum$ which adds the incoming signals after they have been weighted by the synaptic weights of the neuron. The operations described up to this point form a linear combination.

(c) An activation function which restricts and normalizes the amplitude of the output of the neuron to a finite range, typically the interval [0, 1].

The synaptic weights and activation function of a neuron in an ANN are shown in figure 2.1.
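To make the three components above concrete, the minimal Python sketch below implements one artificial neuron: a weighted sum of the inputs plus a bias is passed through a logistic activation that maps the output into [0, 1]. The weights, bias and inputs are illustrative values only, not taken from the thesis.

import numpy as np

def logistic(v):
    # Activation function: squashes the induced local field into [0, 1].
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w_k, b_k):
    # w_k[j] is the weight w_kj of the link from input j to neuron k; the summing
    # node forms the linear combination v_k = sum_j w_kj * x_j + b_k.
    v_k = np.dot(w_k, x) + b_k
    return logistic(v_k)

x = np.array([0.5, -1.2, 3.0])       # input vector (illustrative)
w_k = np.array([0.4, 0.1, -0.6])     # synaptic weights of neuron k (illustrative)
print(neuron_output(x, w_k, b_k=0.2))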


2.3 FUZZY LOGIC

Fuzzy logic is a mathematical tool applied for the modeling of systems; it replaces exact mathematical descriptions that rely on quantitative data. Linguistic terms are used to approximate the amount or quality of data. The mathematics behind fuzzy logic is easy to understand, since the underlying concept is simple: fuzzy logic is a judgment technique without far-reaching difficulties. It can capture a system in a logical manner, and it is easy to layer on more functionality without starting again from the beginning. Fuzzy logic is tolerant of inaccurate data, and fuzzy reasoning embeds this tolerance in the process rather than attaching it at the end. Nonlinear functions can be fitted into a model by fuzzy logic. Fuzzy logic can be built on top of professional experience, can be combined with traditional methods, and is based on natural language: human language is the root of fuzzy logic, and it rests on qualitative descriptions expressed in common language. The term fuzzy first appeared in 1965, when Professor Lotfi Zadeh [15] of the University of California, Berkeley, United States, published a paper entitled "Fuzzy sets", in which he defined a fuzzy set as a set whose boundaries are not precise and whose membership function is not two-valued, as in Aristotelian two-valued logic, but takes membership values ranging from zero to one. Zadeh noticed that the logic of a conventional computer could not manipulate data representing subjective or vague ideas, so he defined fuzzy logic to allow a computer to find a representation of vague ideas similar to human thinking and reasoning. Since then he has achieved many major theoretical breakthroughs in this field, and many renowned research scholars quickly joined him in developing theoretical work, while a few researchers concentrated on solving fuzzy logic problems that were considered complex. In 1975, Professor Mamdani in London, inspired by the work of Lotfi A. Zadeh, developed a strategy for process control and published the encouraging results he had obtained in the control of a steam engine. In 1978 the Danish company F.L. Smidth [16] achieved the control of a cement kiln; it was the first revolutionary industrial application of fuzzy logic. Applications in electrical appliances were the main reason for the subsequent interest: washing machines needing no modification, camcorders with steady-shot image stabilization and many other applications brought the term "fuzzy logic" to the attention of researchers. Fuzzy logic techniques are now applied [17] in all fields of life, particularly in social science, management, engineering, the biological and medical sciences and aviation technology.

If we look deeper into things, we find that they are not precise; the reasoning behind fuzzy logic is based on capturing the process rather than carrying it to the end. The advantage of fuzzy logic is that it is capable of modeling nonlinear functions [18] of varying complexity, it can be combined with conventional methods, and it is based on qualitative descriptions expressed in human language.
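As a small illustration of membership values that vary continuously between zero and one, the Python sketch below defines a triangular membership function for a hypothetical linguistic term such as "moderate rainfall"; the breakpoints are assumed values chosen only to show the idea, not anything specified in the thesis.

def triangular_membership(x, a, b, c):
    # Membership rises linearly from a to the peak at b, falls linearly
    # from b to c, and is 0 outside the interval [a, c].
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a 45 mm rainfall value belongs to the fuzzy set "moderate"
# (hypothetical breakpoints a=20, b=50, c=80).
print(triangular_membership(45.0, a=20.0, b=50.0, c=80.0))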

2.4 RADIAL BASIS FUNCTIONS

Radial basis functions (RBFs) are real-valued functions whose value depends only on a distance. A radial function measured from the origin is defined as $\phi(x) = \phi(\|x\|)$, while the alternative form measured from a center point $x_i$ is $\phi(x, x_i) = \phi(\|x - x_i\|)$. RBFs have been extended to the solution of ODEs, including the nonlinear Klein-Gordon equation [19], higher order ODEs [20] and second order parabolic equations with boundary conditions. Commonly used distance-based functions associated with RBFs are given below, where $r = \|x - x_i\|$ and $\epsilon$ is the shape parameter:

Gaussian (GA): $\phi(r) = e^{-(\epsilon r)^2}$

Multiquadric (MQ): $\phi(r) = \sqrt{1 + (\epsilon r)^2}$

Inverse quadratic (IQ): $\phi(r) = \dfrac{1}{1 + (\epsilon r)^2}$

Inverse multiquadric (IMQ): $\phi(r) = \dfrac{1}{\sqrt{1 + (\epsilon r)^2}}$

Thin plate spline (TPS): $\phi(r) = r^2 \ln r$
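A compact Python sketch of the kernels listed above is given below; r is the distance from the center and eps is the (assumed positive) shape parameter, so the functions mirror the GA, MQ, IQ, IMQ and TPS forms.

import numpy as np

def gaussian(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)                  # GA

def multiquadric(r, eps=1.0):
    return np.sqrt(1.0 + (eps * r) ** 2)            # MQ

def inverse_quadratic(r, eps=1.0):
    return 1.0 / (1.0 + (eps * r) ** 2)             # IQ

def inverse_multiquadric(r, eps=1.0):
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)      # IMQ

def thin_plate_spline(r):
    # r^2 * ln(r), with the removable singularity at r = 0 set to 0.
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r ** 2 * np.log(np.maximum(r, 1e-300)), 0.0)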

2.4.1 RADIAL BASIS FUNCTIONS NEURAL NETWORK

The one-dimensional mapping realized by a radial basis function network [21] is represented as follows:

$$y_i(x) = f(x) = \sum_{j=1}^{m} w_{ji}\, h_j(t), \qquad i = 1, 2, \ldots, n \qquad (2.1)$$

where $w_{ji}$ are the output layer weights, $y$ is the network output and $n$ is the number of output neurons. A radial basis function neural network consists of three layers, as shown in figure 2.2, in which the first layer is the input layer, the second is the hidden layer and the third is the output layer.


Fig 2.1 Back propagation neural network

Fig 2.2 Architecture RBFs Network

The transfer functions used in the first layer of the RBF network are different from the sigmoid functions generally used in the hidden layers of a multilayer perceptron (MLP). We consider only Gaussian RBFs as the activation function of the neurons in the hidden layer. The array of computing units is represented by the hidden nodes with center vectors $c$, parametric vectors of the same size as the input vector $x$. The Euclidean distance between the input vector and a center $c_i$ is defined as:

$$d = \| x(t) - c_i(t) \| \qquad (2.2)$$

The output of the hidden layer is produced by the nonlinear Gaussian activation function of the RBFs, calculated as:

$$h_j(t) = \exp\!\left( -\frac{\| x(t) - c_i(t) \|^2}{2 a_j^2} \right), \qquad j = 1, 2, \ldots, m \qquad (2.3)$$

where $a_j$ is a positive scalar width and $m$ represents the number of hidden nodes. The output layer is a linear weighted combination described as follows:

$$y_i(t) = \sum_{j=1}^{m} w_{ji}\, h_j(t), \qquad i = 1, 2, \ldots, n \qquad (2.4)$$
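The following minimal NumPy sketch evaluates Eqs. (2.2)-(2.4) for a single input vector: Euclidean distances to the m hidden centers, Gaussian activations with widths a_j, and a linear output layer with weights w_ji. The centers, widths and weights are random placeholders, not trained values from the thesis.

import numpy as np

rng = np.random.default_rng(0)
m, dim, n_out = 5, 3, 1                  # hidden nodes, input size, outputs (assumed)
centers = rng.normal(size=(m, dim))      # c_j
widths = np.full(m, 0.8)                 # a_j > 0
weights = rng.normal(size=(m, n_out))    # w_ji

def rbf_forward(x):
    d = np.linalg.norm(x - centers, axis=1)        # Eq. (2.2): ||x - c_j||
    h = np.exp(-d ** 2 / (2.0 * widths ** 2))      # Eq. (2.3): Gaussian activations
    return h @ weights                             # Eq. (2.4): linear output layer

print(rbf_forward(np.array([0.2, -0.5, 1.0])))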

2.5 ARIMA MODEL

The mathematical representation of the ARIMA model was first introduced by Box and Jenkins [22] in their book in 1970 to forecast future trends, represented by the equations:

$$x_t = c + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} = c + \sum_{k=1}^{p} \phi_k x_{t-k} + \sum_{l=1}^{q} \theta_l \varepsilon_{t-l} + \varepsilon_t \qquad (2.5)$$

where $\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$ and $\theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q$ are polynomials in the backshift operator $B$, $\phi_i$ ($i = 1, 2, \ldots, p$) and $\theta_i$ ($i = 1, 2, \ldots, q$) are the autoregressive and moving average parameters, and $\varepsilon_t$ represents white noise with mean zero and variance $\sigma^2$. Such a time series may depend not only on its own previous terms but also on other phenomena and other variables [30].
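For illustration only, an ARIMA(p, d, q) model of the form in Eq. (2.5) can be fitted to a univariate series with the statsmodels package; the order (1, 1, 1) and the synthetic random-walk series below are placeholder assumptions, not the data analyzed in later chapters.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=200))      # synthetic nonstationary series

# ARIMA(p=1, d=1, q=1): AR and MA polynomials in B applied to the once-differenced series.
model = ARIMA(series, order=(1, 1, 1))
fit = model.fit()
print(fit.params)               # estimated phi, theta and noise variance
print(fit.forecast(steps=5))    # five-step-ahead forecast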

2.6 DYNAMIC NONLINEAR AUTOREGRESSIVE NEURAL NETWORK (NAR)

The dynamic nonlinear autoregressive (NAR) model is used for regression, interpolation and prediction [23] of a discrete time series $y(t)$ at time $t$; for series with high variance and sporadic behavior the nonlinear approach is followed. A nonlinear autoregressive neural network is a discrete model consisting of an input layer, input delays, a hidden layer, an output layer and output delays, as shown in figure 2.3, and is approximated as follows:

$$x(k) = [\, x(k), x(k-1), x(k-2), \ldots, x(k-t),\; y(k), y(k-1), y(k-2), \ldots, y(k-p) \,] \qquad (2.6)$$

$$o_{N_1}(k) = f_{N_1}\big( x(k)\, w_{N_1} + b_{N_1} \big) \qquad (2.7)$$

$$y_{N_2}(k) = f_{N_2}\big( o(k)\, w_{N_2} + b_{N_2} \big) \qquad (2.8)$$

where $w$ and $b$ denote the weights and biases of the hidden and output layers and $f_{N_1}$, $f_{N_2}$ are their activation functions.

Fig 2.3 Structure of NAR model system
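A minimal sketch of the NAR idea behind Eqs. (2.6)-(2.8) is shown below: the series is rearranged into windows of p delayed values so that a nonlinear regressor (here a small scikit-learn MLP standing in for the network of Fig 2.3) learns y(k) from its own past. The lag order, network size and synthetic series are illustrative assumptions only.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(y, p):
    # Rows are [y(k-1), ..., y(k-p)]; the target is y(k), i.e. the delayed
    # regressor vector of Eq. (2.6) restricted to output delays.
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    return X, y[p:]

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 400)
y = np.sin(t) + 0.1 * rng.normal(size=t.size)    # noisy nonlinear series

X, target = make_lagged(y, p=4)
nar = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nar.fit(X, target)
print(nar.predict(X[-1:]))       # one-step-ahead prediction from the last window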


2.7 FRACTIONAL DERIVATIVES

The concept of the fractional order derivative emerged back in 1695 with the famous correspondence between L'Hopital and Leibniz [24] about the possibility of fractional order derivatives. The first application of fractional order mathematics was contributed by Abel [25] in 1823, who solved the tautochrone integral problem using a fractional derivative of half order. The application of fractional order differential equations has introduced new concepts and techniques in financial market forecasting. Modeling with fractional orders and the Adomian decomposition method was introduced by Song et al. [26], with application to approximate semi-analytical solutions of a European price model and China's financial market. Biologists have deduced that biological organisms have fractional order electrical conductivity in their cell membranes [27], which is classified among non-integer order models. Kumar et al. [28] proposed estimating the coefficients of a fractional order differential equation based on the Grunwald fractional derivative using the least squares method. Different definitions of the fractional order derivative have been presented in the literature; some of the significant definitions are expressed below.

2.7.1 DEFINITION 1: GRUNWALD-LETNIKOV

Grünwald and Letnikov [29] presented a generalized form of the fractional-order derivative using a binomial expansion:

{}_{a}D_t^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-a)/h]} (-1)^j \binom{\alpha}{j} f(t - jh) \qquad (2.9)

where \binom{\alpha}{j} is the binomial coefficient and \alpha is the constant order, which can be expressed through Euler's Gamma function as follows:


\binom{\alpha}{j} = \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\, \Gamma(\alpha - j + 1)} \qquad (2.10)
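As a numerical illustration of Eqs. (2.9)-(2.10), the following Python sketch (an assumption-laden demonstration, not thesis code; the step size h and the test function are arbitrary choices) truncates the Grünwald-Letnikov limit at a finite step and builds the signed binomial coefficients by a stable recurrence equivalent to the Gamma-function form.

import numpy as np

def gl_derivative(f, t, alpha, a=0.0, h=1e-3):
    # Truncated Grunwald-Letnikov sum, Eq. (2.9).  The signed binomial
    # coefficients (-1)^j * C(alpha, j) of Eq. (2.10) are generated by the
    # recurrence w_j = w_{j-1} * (1 - (alpha + 1)/j), w_0 = 1.
    N = int((t - a) / h)
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    j = np.arange(N + 1)
    return (w * f(t - j * h)).sum() / h**alpha

# Example: half-order derivative of f(t) = t at t = 1; the exact value is
# 2 * sqrt(t / pi), approximately 1.1284 for t = 1.
print(gl_derivative(lambda t: t, 1.0, 0.5))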

2.7.2 DEFINITION 2: MICHELE CAPUTO

Michele Caputo [30] defined the fractional-order derivative through an integral formulation as follows:

{}^{C}_{a}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n - \alpha)} \int_{a}^{t} \frac{f^{(n)}(\tau)}{(t - \tau)^{\alpha - n + 1}}\, d\tau \qquad (2.11)

where \alpha is a real number and n is an integer. The Grünwald-Letnikov definition is identical to the Caputo definition of the fractional derivative except in the case of a constant function, for which the Caputo derivative is zero, while the Riemann-Liouville derivative of a constant is non-zero.

2.7.3 DEFINITION 3: ATANGANA-BALEANU

The left Atangana-Baleanu [31] fractional derivative on the interval 0 < \alpha < 1 in the Sobolev space is defined by:

(T^{\alpha} h)(x) = \frac{B(\alpha)}{1 - \alpha} \int_{0}^{x} h'(s)\, E_{\alpha}\!\left[ -\frac{\alpha}{1 - \alpha} (x - s)^{\alpha} \right] ds \qquad (2.12)

where h \in H^{1}(0,1) in the Sobolev space, B(\alpha) > 0 is a normalization function satisfying the condition B(0) = B(1) = 1, and E_{\alpha} is the single-parameter Mittag-Leffler function.

2.7.4 DEFINITION 4: RIEMANN-LIOUVILLE

The Riemann-Liouville definition [32] applies integer-order differentiation to a fractional integral to define the fractional-order derivative as:


{}_{a}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n - \alpha)} \frac{d^{n}}{dt^{n}} \int_{a}^{t} (t - \tau)^{n - \alpha - 1} f(\tau)\, d\tau \qquad (2.13)

The fractional derivative of a power function, using the Riemann-Liouville definition in terms of the Gamma function, is given by

\frac{d^{q}}{dx^{q}} x^{m} = \frac{\Gamma(m + 1)}{\Gamma(m - q + 1)}\, x^{m - q} \qquad (2.14)

2.7.5 FRACTIONAL TIME SERIES

Fractional time series analysis was developed by Harold Hurst [33] while calculating the optimal dam size for the river Nile, which was directly linked with the fractional dimension of the flow. Consider d as the periodic time duration over which the range R is measured, where R is the difference between the largest and smallest cumulative deviations encountered during the interval d. The relation can be represented as

R \propto d^{H}

where H is the Hurst exponent, varying from zero to one; a higher value of the Hurst exponent corresponds to a smaller fractal dimension (a smoother curve).

2.7.6 ARFIMA MODEL

The ARFIMA(p,d,q) model defines d for any real number using the binomial expansion and the Gamma function as

(1 - B)^{d} = \sum_{j=0}^{\infty} \binom{d}{j} (-B)^{j} = \sum_{j=0}^{\infty} \frac{\Gamma(d + 1)}{\Gamma(j + 1)\, \Gamma(d + 1 - j)} (-B)^{j} \qquad (2.15)

where -1/2 < d < 1/2.
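A short sketch of how the expansion in Eq. (2.15) can be applied in practice is given below (Python/NumPy, illustrative only; the recurrence form of the weights and the toy random-walk series are assumptions made for demonstration, not thesis code).

import numpy as np

def frac_diff(x, d, n_weights=None):
    # Apply the fractional difference operator (1 - B)^d of Eq. (2.15).
    # The binomial weights pi_j are built with the recurrence
    # pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j, which is equivalent to
    # the Gamma-function expression but avoids overflow.
    x = np.asarray(x, dtype=float)
    n = len(x) if n_weights is None else n_weights
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    # y_t = sum_j w_j * x_{t-j}; full convolution truncated to len(x)
    return np.convolve(x, w)[:len(x)]

# Toy usage: fractionally difference a random walk with d = 0.4
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))
y = frac_diff(x, d=0.4)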


Shaofei et al. [34] and many other authors [35] suggest that the use of fractional ARIMA instead of an integer-order one can improve forecasting. The general form of the ARFIMA(p,d,q) process is defined as

\varphi(B)\, (1 - B)^{d} X_t = \theta(B)\, \varepsilon_t \qquad (2.16)

where -1/2 < d < 1/2.

The above model is widely used for LRD and SRD time series [36]. In ARFIMA(p,d,q), p is the autoregressive order, q is the moving-average order and d is the differencing order in decimal form. The ARFIMA(p,d,q) process is a generalized form of the ARIMA process, since for integer values of d the ARFIMA model reduces to the ARIMA model. Many non-stationary time series contain a nonlinear trend, and removing the trend is the first step in modeling such series. Box-Jenkins theory serves as a filter to separate the signal from the noise. In the residuals of an ARIMA model we may notice a pattern of fractional correlation that commences with the first lag. In such conditions, fractional differences are useful to capture the non-linearity by applying the binomial expansion to estimate the ARFIMA(p,d,q) parameters. By applying the fractional-order difference filter, the residual obtained is uncorrelated with the lags of its variables. Mandelbrot [38] suggested the use of the range over the standard deviation, the R/S statistic called the "rescaled range", which was used by the hydrologist Harold Hurst [39] to produce the Hurst exponent. The main concept of R/S analysis is to analyze the rescaled cumulative deviation from the mean. The range R is first estimated as:

R_n = \max_{1 \le m \le n} \sum_{i=1}^{m} \left( Y_i - \bar{Y}_n \right) - \min_{1 \le m \le n} \sum_{i=1}^{m} \left( Y_i - \bar{Y}_n \right) \qquad (2.17)

where R_n is the range of accumulated deviations of Y defined over the period n. The standard deviation S_n is defined as


S_n = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( Y_i - \bar{Y}_n \right)^2 \right]^{1/2} \qquad (2.18)

As n increases, the following relation holds:

\log\left[ R_n / S_n \right] = \log c + H \log n \qquad (2.19)

which reflects the linearity used in estimating the Hurst slope H. In the ARFIMA model the intensity d of the fractional Gaussian noise of the data is estimated with the maximum-likelihood Hurst parameter, defined as:

d = H - 1/2 \qquad (2.20)

This relationship permits researchers to define certain boundaries, as follows:

(a) if d = 0, the process does not contain long-term memory and is stationary;

(b) if 0 < d < 0.5, the process is persistent with long-term memory;

(c) if d = 0.5, the process represents a random walk and is unpredictable.

The estimate of d in financial data series typically differs from 0 and 0.5.
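The rescaled-range procedure of Eqs. (2.17)-(2.20) can be sketched as follows (a hedged Python illustration; the choice of window sizes and the white-noise test series are assumptions made only for demonstration).

import numpy as np

def rescaled_range(y):
    # R_n / S_n for one window, following Eqs. (2.17)-(2.18).
    y = np.asarray(y, dtype=float)
    dev = np.cumsum(y - y.mean())           # cumulative deviations from the mean
    r = dev.max() - dev.min()               # range R_n, Eq. (2.17)
    s = y.std()                             # standard deviation S_n, Eq. (2.18)
    return r / s if s > 0 else np.nan

def hurst_exponent(y, window_sizes=(16, 32, 64, 128, 256)):
    # Average R/S over non-overlapping windows of each size, then fit the
    # slope of log(R/S) against log(n), Eq. (2.19).
    y = np.asarray(y, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [y[i:i + n] for i in range(0, len(y) - n + 1, n)]
        rs = np.nanmean([rescaled_range(c) for c in chunks])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    H = np.polyfit(log_n, log_rs, 1)[0]     # Hurst slope H
    return H, H - 0.5                       # d = H - 1/2, Eq. (2.20)

# Toy usage on white noise (expected H close to 0.5, d close to 0)
rng = np.random.default_rng(1)
print(hurst_exponent(rng.standard_normal(2048)))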

2.8 LSTM MODEL

Neural networks are efficient at extracting nonlinear features from long-memory data because of their versatility and the use of nonlinear activation functions in each layer. Kumarasinghe et al. [37] designed a Long Short-Term Memory (LSTM) network for intelligent prediction of the Colombo Stock Exchange. To understand the working of the LSTM model, consider the RNN mechanism, a sequential model that operates by feeding time series data as an input vector and producing a vector output through the neural network structure in the model's cell, as shown in Figure 2.4. The time series data are passed through the cell as a sequential vector; at each step the output value of the cell is concatenated with the data of the next time step and serves as input for that step. The process is repeated until the last time step.

Fig.2.4: Structure of RNN Neural Network

The cell in the figure can be substituted with various types of cells. In this research we have selected the standard LSTM with forget gates, introduced by F. Gers [38]. The LSTM consists of interacting neural networks, representing the forget gate, input gate, input candidate gate and output gate. The output value of the forget gate varies between zero and one. The forget gate discards the parts of the previous cell state that are not needed and keeps the information necessary for prediction; it is represented as

f_t = \sigma\!\left( W_f \cdot [h_{t-1}, x_t] + b_f \right) \qquad (2.21)

The activation function \sigma, often called the sigmoid, enables the nonlinear capabilities of the model:

\sigma(x) = \frac{1}{1 + e^{-x}} \qquad (2.22)

In the next step, the input gate and the input candidate gate activate together to make a new cell state C_t, which shifts to the next time step as the renewed cell state. The sigmoid and the hyperbolic tangent are used as activation functions at the input gate and the input candidate gate, respectively, providing the gate output i_t and the candidate cell state \tilde{C}_t represented by the standard equations

i_t = \sigma\!\left( W_i \cdot [h_{t-1}, x_t] + b_i \right) \qquad (2.23)

\tilde{C}_t = \tanh\!\left( W_C \cdot [h_{t-1}, x_t] + b_C \right) \qquad (2.24)
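A compact NumPy sketch of one LSTM time step built from the gate equations (2.21)-(2.24) is shown below. It is illustrative only; the random placeholder weights and the dictionary layout of the parameters are assumptions, and the thesis experiments use a full deep-learning implementation rather than this toy code.

import numpy as np

def sigmoid(z):
    # Eq. (2.22)
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # One LSTM time step with forget, input, candidate and output gates.
    # W and b hold the four weight matrices/bias vectors in one dict each.
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])           # forget gate, Eq. (2.21)
    i_t = sigmoid(W['i'] @ z + b['i'])           # input gate, Eq. (2.23)
    c_hat = np.tanh(W['c'] @ z + b['c'])         # candidate state, Eq. (2.24)
    c_t = f_t * c_prev + i_t * c_hat             # renewed cell state
    o_t = sigmoid(W['o'] @ z + b['o'])           # output gate
    h_t = o_t * np.tanh(c_t)                     # hidden state passed onward
    return h_t, c_t

# Toy usage with random weights: 1 input feature, 4 hidden units
rng = np.random.default_rng(2)
n_in, n_h = 1, 4
W = {k: rng.standard_normal((n_h, n_h + n_in)) * 0.1 for k in 'fico'}
b = {k: np.zeros(n_h) for k in 'fico'}
h, c = np.zeros(n_h), np.zeros(n_h)
for x in [0.3, -0.1, 0.5]:                       # a short input sequence
    h, c = lstm_step(np.array([x]), h, c, W, b)
print(h)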


Fig.2.5: LSTM Neural Network Structure

2.9 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK

(GRNN)

A generalized regression neural network (GRNN) is used for function approximation [39]. It consists of two layers: the first is a radial basis layer and the second is a special linear layer. The architecture of the GRNN is shown in Figure 2.7. It is similar to the RBF neural network; the only difference is the addition of the second layer. The input vector is represented by P, and the bias vector b1 is set as a column vector. Each neuron in the radial basis layer computes the weighted input with its bias value, which then passes through the second layer to produce the generalized regression output.

2.10 PROPOSED HYBRID ARFIMA-LSTM MODEL

The residual white noise of the ARFIMA model is processed to detect the remaining pattern, with the addition of exogenous variables, in the hybrid LSTM model. The noise is passed through the LSTM neural network to model the leftover signal with the help of external variables. The time series data decompose into linear and nonlinear components, which we can express as

x_t = L_t + N_t \qquad (2.25)

Here L_t represents the linear part of the data modeled with the ARFIMA model, which performs well on linear problems:

\varepsilon_t = x_t - L_t \qquad (2.26)

where \varepsilon_t is the residual left by the ARFIMA model. The LSTM component is calculated by the equation

N_t = f(\varepsilon_t) = f(x_t - L_t) \qquad (2.27)
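The two-stage decomposition of Eqs. (2.25)-(2.27) can be summarized schematically as below (a Python sketch; the stand-in moving-average and one-lag residual models are placeholders for the fitted ARFIMA and LSTM components and are not the models used in the thesis).

import numpy as np

def hybrid_forecast(x, linear_fit, nonlinear_fit):
    # Schematic two-stage decomposition of Eqs. (2.25)-(2.27).
    # linear_fit(x) should return the in-sample linear predictions L_t;
    # nonlinear_fit(residual) should return the estimate N_t of the
    # residual pattern.  Both are placeholders here, not the thesis code.
    L = linear_fit(x)                  # linear component, Eq. (2.25)
    eps = x - L                        # residual, Eq. (2.26)
    N = nonlinear_fit(eps)             # nonlinear component, Eq. (2.27)
    return L + N                       # combined forecast x_t = L_t + N_t

# Toy usage with stand-in models: a moving average for L_t and a one-lag
# autoregression on the residual for N_t (illustrative only).
def moving_average(x, k=5):
    return np.convolve(x, np.ones(k) / k, mode='same')

def ar1_on_residual(eps):
    phi = np.polyfit(eps[:-1], eps[1:], 1)[0]
    return np.concatenate([[0.0], phi * eps[:-1]])

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(200)) + 100
print(hybrid_forecast(x, moving_average, ar1_on_residual)[:5])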


CHAPTER 3

NEURO-FUZZY MODELING AND PREDICTION OF SUMMER PRECIPITATION WITH

APPLICATION TO DIFFERENT METEOROLOGICAL STATIONS

3.1 INTRODUCTION

The research community has a growing interest in neural networks because of their practical applications in many fields for accurate modeling and prediction of the complex behavior of systems arising in engineering, economics, business, finance and meteorology. Artificial neural networks (ANNs) are very flexible function-approximation tools used for universal modeling based on separating the past dynamics into clusters, in which we construct local models to capture the potential growth of the series depending on the previously known values. In this study, rain data of five major cities of the Sindh province of Pakistan are considered, and the summer rainfall of these five synoptic stations is statistically evaluated for prediction. The nonlinear autoregressive network with exogenous inputs (NARX) model for a time series is analyzed to evaluate the pattern of precipitation. We train a highly nonlinear NARX network model from randomly generated initial weights that converges to the best solution with the help of the Levenberg-Marquardt algorithm. A multi-step-ahead NARX time series predictor is developed for rain forecasting. The NARX model is able to capture the nonlinear behavior, with a high correlation coefficient R ranging from 0.70 to 0.99 for the different synoptic stations. The results calculated using the proposed NARX neural network time series approach are accurate and reliable for rainfall forecasting, based on the correlation coefficient and mean-square-error indices. Prediction of rain is essential for the alleviation and management of floods, environmental flows, water demand by different sectors, maintaining reservoir levels, and disasters. However, quantitative rain forecasting is a challenge because of the complicated atmospheric processes involved [40]. The variation in rainfall patterns affects the economic, agricultural and disaster management sectors of


any country [41]. Due to these facts, the research community has a growing interest in rainfall forecasting and has provided different studies for its accurate and reliable prediction using deterministic solvers [42-43] as well as stochastic procedures [44-49], i.e., those developed by exploiting artificial intelligence techniques. Artificial-intelligence-based soft computing paradigms have been developed extensively by researchers; in these solvers, artificial neural network (ANN) modeling and optimization with global and local search methodologies are mainly used [50-54], while prominent recent applications and procedures span broad fields including astrophysics [55], plasma physics [56], atomic physics [57], nonlinear optics [58], thermodynamics [59], electromagnetics [60], nanotechnology [61], fluid dynamics [62], electric motors [63], rotating electrical machines [64] and bioinformatics [65]. These recently reported studies establish that research in ANN-based models is promising to explore and exploit for the rainfall forecasting datasets of meteorological stations. A time series forecasting model based on an ANN can forecast monthly monsoon rainfall more accurately than the ARIMA model and the statistical model of the adaptive neuro-fuzzy inference system [66]. Real quantitative rainfall forecasting is complicated and challenging [67] because of the complex rainfall pattern [68]. Thus, rainfall prediction is treated as one of the prominent problems in the class of hydrological events [69]. Forecasting techniques include statistical methods based on the ARIMA regression model [70], the hidden Markov model [71], exponential smoothing with ANN [72], the adaptive network-based fuzzy inference system (ANFIS) [73], and the fuzzy inference system (FIS) [74]. In recent years, NARX has become popular [75] for forecasting in several domains. Rainfall data are multidimensional, nonlinear, and dynamic in nature [76]. Atangana et al. [77] used fractional difference operators to express and design real-world problems. Arqub et al. [78] presented a new method to solve physical and engineering nonlinear modeling problems based on fuzzy differential equations. Approximate solutions of real-world second-order fuzzy boundary value problems with kernel theory were presented by Arqub et al. [79]. Volterra integro-differential equations appear in many real-life phenomena for the modeling of many nonlinear systems, including meteorological systems. Arqub et al. [80] presented a kernel Hilbert space method for the numerical solution of fuzzy Fredholm-Volterra integro-differential equations. Fractional-order dynamics have been used efficiently for heat transfer and predictive control problems. Atangana et al. [81] proposed a fractional derivative with nonlocal and nonsingular kernel for the fractional heat transfer model.

Therefore, for time series datasets one can exploit appropriate NARX-based models for prediction. In the present study, we apply NARX modeling for rainfall forecasting. The variation in rainfall patterns affects the economy, agriculture and disaster management sector of any country [82]. Reported studies reflect that 40% of the people in Pakistan are highly prone to frequent multiple disasters [83] associated with variations in rainfall patterns, storms, floods and droughts. The rainfall over Sindh province is the result of monsoon depressions [84] forming in the Bay of Bengal and occasionally moving westward into lower Sindh [85]. About 80% of the rural population of Sindh depends on agricultural activities [86], such as crops, livestock, fisheries and forestry. Severe weather can be devastating and causes massive loss of lives and property [87]. In the presented study, five meteorological stations of Sindh are selected, as shown in Fig. 3.1. Data of the annual and monthly rainfall amounts of summer (June to August) for 42 years are analyzed to show their relationships with the annual summer rainfall amounts. The data for the first 37 years are used to develop the fuzzy neural modeling, and the last five years of data are used to verify the results of the models.

Study area of rainfall dataset

The study comprises rainfall data of five major cities (synoptic stations) in the Sindh province of Pakistan. To evaluate rainfall trends in different cities of Sindh, a dataset of 42 years for the period 1971 to 2012 is considered [88]; the station locations are shown in Fig. 3.1.


Fig.3.1: Location of the gauging stations of Sindh, Pakistan

Table 3.1 Location of synoptic stations in Sindh, Pakistan

Station      Latitude (°N)   Longitude (°E)   Elevation (m)
Karachi      24.893          67.0281          8
Chor         25.517          69.7667          12
Hyderabad    25.392          68.3737          13
Badin        24.66           68.84            14
Nawabshah    26.25           68.41            34


3.1.1 KARACHI

Karachi is the largest city of Pakistan, located in the south-eastern part of the country. The city is an important industrial centre and port on the coast of the Arabian Sea, covering an area of around 3,527 km². The monthly mean rainfall of Karachi (1971-2012) is evaluated as shown in Fig. 3.2.

Fig.3.2: Annual rainfall at Karachi (1971-2012)

Table 3.2 Probability distribution of rainfall in Karachi (Kolmogorov-Smirnov test)

Station   Month    Distribution      Statistic   P-Value
Karachi   June     Power Function    0.2849      0.0016
          July     Beta              0.1428      0.3269
          August   Beta              0.1145      0.5994


The city experiences high precipitation during the monsoon season in July-August. The average rainfall for the month of June is 10.74 mm, which comprises 8% of the summer rainfall of Karachi. The month of June has shown an increasing trend of 4% of summer rainfall compared with the rain data of the first two decades of the record. The best-fitting probability distribution for the June rain data in Karachi is the power function distribution, with a significance value of P < 0.05, as shown in Table 3.2.

The probability density distribution of Karachi rainfall is shown in Fig. 3.3. The beta distribution is proposed as the probability distribution for rainfall in July, as it satisfies the Kolmogorov-Smirnov test with P > 0.05. About 50% of the July rainfall data for the period (1971-2012) show less than 20 mm of precipitation, indicating a right-skewed distribution.


Fig.3.3: Probability density function for the month of (a) June, (b) July and (c) August for

Karachi

3.1.2 NAWABSHAH

The monthly mean rainfall of Nawabshah (1971-2012) is given in Fig. 3.4 [49]. The city of Nawabshah has observed about 5% of its summer rainfall in June. The monthly mean rainfall for June is calculated as 5.77 mm, which is much lower than that of July and August. The city faced its highest rainfall in July 2003, with an average monthly rainfall of 301.5 mm. The entire period between 1995 and 2005 remained almost completely dry, with negligible rainfall.


Fig.3.4: Annual Rainfall at Nawabshah (1971-2012)

Fitting the probability distribution: The probability density distribution of Nawabshah rainfall is shown in Fig. 3.5 and Table 3.3. The month of July almost sustains the long-term trend of the mean rainfall, while the trend line of August rainfall is shifted toward the wetter side. About 70% of the August rainfall data lie below the mean value, indicating strong positive skewness of the August data.

Table 3.3 Probability distribution of rainfall in Nawabshah (Kolmogorov-Smirnov test)

Station     Month   Distribution         Statistic   P-Value
Nawabshah   June    Normal               0.3214      0.0002
            July    Gen. Extreme Value   0.0963      0.7951
            Aug     Gen. Extreme Value   0.1517      0.2608


The generalized extreme value distribution is proposed as the probability distribution for rainfall in July and August, as it satisfies the Kolmogorov-Smirnov test with significant P-values for the proposed distribution.


Fig.3.5: Probability density function for the month of (a) June, (b) July and (c) Aug for

Nawabshah

3.1.3 HYDERABAD

The monthly mean rainfall of Hyderabad (1971-2012) is evaluated as shown in Fig. 3.6 [88]. The city of Hyderabad has observed about 5% of its summer rainfall in June. The monthly mean rainfall for June is calculated as 5.77 mm, which is much lower than that of July and August.

Fitting the probability distribution: The probability density distribution of Hyderabad rainfall is shown in Fig. 3.7 and Table 3.4. The analysis of the first two decades indicates that the June contribution to summer rainfall decreased by 2%. The monthly rainfall of July decreased from 42% to 37% of the summer rainfall.


Fig.3.6: Annual Rainfall at Hyderabad (1971-2012).

The August precipitation increased by 7% of the summer rainfall compared with the first two decades. The Johnson SB probability distribution is proposed for rainfall in July, with a value of P = 0.7819, which is highly significant, and the generalized extreme value distribution is estimated for August, as it satisfies the Kolmogorov-Smirnov test with a significant value of P for the proposed distribution.

Table 3.4 Probability distribution of rainfall in Hyderabad (Kolmogorov-Smirnov test)

Station     Month   Distribution         Statistic   P-Value
Hyderabad   June    Gen. Pareto          0.309       0.0005
            July    Johnson SB           0.098       0.7819
            Aug     Gen. Extreme Value   0.157       0.2245



Fig.3.7: Probability density function for the month of (a) June, (b) July and (c) Aug for

Hyderabad

3.1.4. CHOR

The monthly mean rainfall of Chor (1971-2012) is evaluated as shown in Fig. 3.8 [88]. The trend line of the mean annual rainfall of Chor (1971-2012) indicates that the rainfall pattern sustains its shape. About ten values of the monthly mean rainfall are found to be greater than 200 mm. The highest rainfall was measured as 478 mm in December 1990.

Fitting the probability distribution: The probability density distribution of Chor rainfall is shown in Fig. 3.9 and Table 3.5. About 75% of the June rainfall is recorded below the mean value (18.52 mm). Chor is the only station where all three summer months exhibit a similar rainfall pattern. The generalized extreme value distribution is found to be the best-fitted distribution for June, July and August, with highly significant values of P by the Kolmogorov-Smirnov test.

Fig.3.8: Monthly mean rainfall Chor (1971-2012)


Table 3.5 Probability distribution of rainfall in Chor (Kolmogorov-Smirnov test)

Station   Month   Distribution         Statistic   P-Value
Chor      June    Gen. Extreme Value   0.1941      0.0732
          July    Gen. Extreme Value   0.1189      0.5516
          Aug     Gen. Extreme Value   0.0601      0.9961


Fig.3.9: Probability density function for the month of (a) June, (b) July and (c) Aug for Chor


Fig.3.10: Monthly mean rainfall Badin (1971-2012)

3.1.5 BADIN

The monthly mean rainfall of Badin (1971-2012) is evaluated as shown in Fig. 3.10 [88]. June has received about 7% of the summer rainfall, while July has observed 40%. About 53% of the summer rainfall is recorded in August.

Fitting the probability distribution: The probability density distribution of Badin rainfall is shown in Fig. 3.11 and Table 3.6. About 70% of the June rainfall is recorded below the mean value. About 12 values of the monthly mean rainfall greater than 200 mm are noticed in the complete rain record. The highest rainfall is measured as 459 mm in August 1979.

Table 3.6 Probability distribution of rainfall in Badin (Kolmogorov-Smirnov test)

Station   Month   Distribution         Statistic   P-Value
Badin     June    Beta                 0.2369      0.01
          July    Gen. Extreme Value   0.1246      0.493
          Aug     Johnson SB           0.1161      0.582

The beta distribution is proposed as the probability distribution for rainfall in June, with P = 0.01, which is significant; the generalized extreme value distribution is estimated for July, and the Johnson SB distribution is found to fit the probability distribution in August.


Fig.3.11: Probability density function for the month of (a) June, (b) July and (c) August for Badin

3.2. STRUCTURE OF NARX MODEL

NARX uses a neural network that models the past values of a time series to forecast its future values. It consists of an input layer, a hidden layer, a delay layer and an output layer. Consider t as the number of input delay steps and s as the number of output delay steps; the output of the jth hidden node then has the form

y(k) = h\big[ x(k), x(k-1), x(k-2), \ldots, x(k-t), y(k-1), \ldots, y(k-s) \big] + \varepsilon(k)

o_{N_I}(k) = f_{N_I}\!\left( x(k)\, w_{N_I} + b_{N_I} \right)

y(k) = f_{N_2}\!\left( o_{N_I}(k)\, w_{N_2} + b_{N_2} \right) \qquad (3.1)

Here x is the input vector of dimension P and y is the output vector of dimension Q. o is the hidden-layer node vector of dimension N, t is the input delay order and s is the output delay order; b_{N_I} is the threshold of the input layer and b_{N_2} is the threshold of the hidden layer. The connection weight between the hidden and delay layers is symbolized by w_{N_1}, the transfer function for the hidden nodes is f_{N_1}, and the activation function for the output nodes is f_{N_2} in the network architecture. The NARX neural network is a recurrent discrete-time network that can be expressed as:

y(n+1) = f\big[ y(n), \ldots, y(n-d+1), x(n-k), \ldots, x(n-d-k+1) \big] \qquad (3.2)

In general, the delay term k, which is known as the process dead time (Haykin, 1999), is taken as zero. The NARX model then reduces to the following model:

y(n+1) = f\big[ y(n), \ldots, y(n-d+1), x(n), x(n-1), \ldots, x(n-d+1) \big] \qquad (3.3)

In vector form it can be written as

y(n+1) = f\big[ \mathbf{y}(n), \mathbf{x}(n) \big] \qquad (3.4)

where the vectors x(n) and y(n) represent the input and output regressors, respectively. The NARX model is trained with the Levenberg-Marquardt algorithm because of its superior performance.
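The thesis builds the NARX model with the MATLAB neural network toolbox; the following Python sketch is only a hedged illustration of the regressor construction in Eqs. (3.2)-(3.4), with one tanh hidden layer and random placeholder weights. The sizes mirror the three-delay, three-neuron configuration used later, but nothing else is taken from the actual implementation.

import numpy as np

def narx_step(y_hist, x_hist, W1, b1, W2, b2):
    # One-step-ahead NARX prediction, Eq. (3.4): y(n+1) = f[y(n), x(n)]
    # with tapped delay lines y_hist = [y(n), ..., y(n-d+1)] and
    # x_hist = [x(n), ..., x(n-d+1)] as in Eq. (3.3).
    z = np.concatenate([y_hist, x_hist])      # input regressor
    hidden = np.tanh(W1 @ z + b1)             # sigmoid-type hidden layer
    return W2 @ hidden + b2                   # linear output layer

# Toy usage: 3 output delays, 3 input delays, 3 hidden neurons; the
# weights are random placeholders, not trained values.
rng = np.random.default_rng(4)
d, n_hidden = 3, 3
W1 = rng.standard_normal((n_hidden, 2 * d)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal(n_hidden) * 0.1
b2 = 0.0
y_hist = np.array([1.2, 1.0, 0.8])
x_hist = np.array([0.4, 0.3, 0.5])
print(narx_step(y_hist, x_hist, W1, b1, W2, b2))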

3.2.1 LEVENBERG–MARQUARDT (LM) ALGORITHM

The Levenberg-Marquardt (LM) algorithm was first proposed by Kenneth Levenberg in 1944 and later refined by Donald Marquardt in 1963; it is a combination of the steepest descent method (SDM) and the Gauss-Newton method (GNM). The algorithm is robust because it inherits the convergence capability of the GNM and the stability of the SDM. The basic concept of the LM algorithm is a combined training process: in regions of complex curvature the algorithm behaves like the SDM until the local quadratic approximation is adequate, after which convergence is significantly accelerated by the Gauss-Newton method. In the LM algorithm, the minimum of the function F(x), expressed as a sum of squares of nonlinear functions, is sought:


F(x) = \frac{1}{2} \sum_{i=1}^{m} \left[ f_i(x) \right]^2 \qquad (3.5)

The Jacobian of the function f(x) is represented by J_i(x), and the LM method searches in the direction given by the equation

\left( J_k^{T} J_k + \lambda_k I \right) p_k = - J_k^{T} f_k \qquad (3.6)

where \lambda_k are nonnegative scalars and I is the identity matrix. The update rule for the weights in the LM algorithm can be represented as

w_{k+1} = w_k - \left( J_k^{T} J_k + \lambda_k I \right)^{-1} J_k^{T} e_k \qquad (3.7)

Here e_k is the error vector and w_k is the weight vector.
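A minimal sketch of the damped update of Eq. (3.7) is given below (Python/NumPy; the exponential test problem and the simple doubling/halving damping schedule are assumptions for illustration and do not reproduce MATLAB's trainlm behaviour).

import numpy as np

def lm_update(w, residual_fn, jacobian_fn, mu):
    # One Levenberg-Marquardt step, Eq. (3.7):
    # w_{k+1} = w_k - (J^T J + mu I)^(-1) J^T e
    e = residual_fn(w)
    J = jacobian_fn(w)
    A = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ e)

# Toy usage: fit y = a*exp(b*t) to noisy data (a, b are the weights).
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t) + 0.05 * rng.standard_normal(20)
res = lambda w: w[0] * np.exp(w[1] * t) - y
jac = lambda w: np.column_stack([np.exp(w[1] * t), w[0] * t * np.exp(w[1] * t)])

w, mu = np.array([1.0, 1.0]), 1e-2
for _ in range(20):
    w_new = lm_update(w, res, jac, mu)
    # simple damping schedule: accept and relax mu if the error decreased
    if (res(w_new) ** 2).sum() < (res(w) ** 2).sum():
        w, mu = w_new, mu * 0.5
    else:
        mu *= 2.0
print(w)   # should approach (2.0, 1.5)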

3.2.2 ADVANTAGES OF LM ALGORITHM

The LM algorithm has a faster convergence rate than either the GNM or the SDM.

At each iteration the LM algorithm has two possible options for the search direction.

It can handle multiple parameters at the same time.

The algorithm can find an optimal solution even from an unsuitable initial guess.

3.3. EVALUATION CRITERIA

In order to evaluate the performance of the proposed nonlinear combination model, we use the mean absolute error (MAE), the root mean square error (RMSE) and the correlation coefficient R, defined as follows:


MAE = \frac{1}{N} \sum_{t=1}^{N} \left| y_t - \hat{y}_t \right|

RMSE = \sqrt{ \frac{1}{N} \sum_{t=1}^{N} \left( y_t - \hat{y}_t \right)^2 }

R = \frac{ n \sum xy - \sum x \sum y }{ \sqrt{ \left[ n \sum x^2 - \left( \sum x \right)^2 \right] \left[ n \sum y^2 - \left( \sum y \right)^2 \right] } } \qquad (3.8)
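The error indices of Eq. (3.8) translate directly into code; a small Python helper is sketched below (illustrative only, with a toy data example; the function name is an assumption).

import numpy as np

def evaluation_metrics(y_true, y_pred):
    # Error indices of Eq. (3.8): MAE, RMSE and the correlation coefficient R.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return mae, rmse, r

# Toy usage
print(evaluation_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))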

3.4. MODELING WITH NARX

The rainfall data of 2008-2012 are used to evaluate the prediction accuracy of the NARX models. Mean monthly rainfall data from September to May are evaluated. Out of the nine available inputs, the mean rainfall data of three months are selected as input variables based on the Pearson correlation, taking those with the maximum value of R with the summer rainfall data. The proposed network architecture is constructed using the MATLAB neural network toolbox and is composed of an input layer, one hidden layer, and one output layer with a feedback connection, as shown in Fig. 3.12.

Fig. 3.12 Structure of NARX Model for the rain forecasting

In the model, the monthly mean rainfall values are added as exogenous inputs to improve the forecasting results. The sigmoid function is used as the activation function of the hidden-layer neurons. The model contains three hidden neurons in one hidden layer with one output in the proposed NARX structure. The network is trained using the Levenberg-Marquardt algorithm as back-propagation through time (BPTT) in epoch-wise mode.


In the modeling process, 37 years of monthly mean rainfall data were used, of which 70% are used for training, 15% for validation and the remaining 15% for testing; the last five years of data are used for forecasting and analysis of the results. In this study, the Levenberg-Marquardt (LM) algorithm is used to train the neural network.

3.4.1 KARACHI

Result for June: Three monthly rainfall series are used as input values for the NARX network, with 13 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.7 and Figs. 3.13-3.14.

Table 3.7 Result of NARX for the month of June

             Target Value   MSE        R
Training     26             3.14E-02   0.9652
Validation   6              4.46E-02   0.9469
Testing      5              5.46E-01   0.9968

Results for July: The three monthly mean rainfall series with the highest Pearson correlation are used as inputs to the NARX network with 13 delays per variable.


Fig.3.13: Regression analysis of NARX for training, validation and test sets

Table 3.8 Result of NARX for the month of July

             Target Value   MSE        R
Training     26             1.24E-01   0.9402
Validation   6              1.80E-01   0.7926
Testing      5              4.48E-01   0.8278

Moreover, three hidden neurons in one hidden layer with one output are used in the NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.8 and Figs. 3.15-3.16.


Fig.3.14: Regression fit between observed and predicted rainfall of June at Karachi

Fig.3.15: Regression analysis of NARX for training, validation and test sets

Fig.3.16: Regression fit between observed and predicted rainfall of July at Karachi


Results for August: The three monthly mean rainfall series with the highest Pearson correlation are used as inputs to the NARX network with 14 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure, as presented in Table 3.9.

Table 3.9 Result of NARX for the month of August

             Target Value   MSE        R
Training     26             1.09E-01   0.9366
Validation   6              6.05E-02   0.9291
Testing      5              1.99E-01   0.867

After successfully building the NARX model, the results are generated and presented in Figs. 3.17-3.18.

Fig.3.17: Regression analysis of NARX for training, validation and test sets


Fig.3.18: Regression fit between observed and predicted rainfall of June at Karachi

A comparison of the forecasting NARX and ARIMA models is performed, and the results are presented in Fig. 3.19.

Fig.3.19: Comparison of forecasting NARX and ARIMA models

3.4.2 HYDERABAD

Result for June: The three monthly input series with the highest Pearson correlation are used as inputs to the NARX network with seven delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.10 and Figs. 3.20-3.22.


Fig.3.20: Regression analysis of NARX for training, validation and test sets

Table 3.10 Result of NARX for the month of June

             Target Value   MSE        R
Training     26             3.45E-07   1
Validation   6              1.32E-01   0.8631
Testing      5              1.10E-01   0.8688

Results for July: The three monthly mean rainfall series with the highest Pearson correlation are used as inputs to the NARX network with 14 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.11 and Figs. 3.22-3.23.


Fig.3.21: Regression fit between observed and predicted rainfall of June at Hyderabad

Fig.3.22: Regression analysis of NARX for training, validation and test sets

Table 3.11 Result of NARX for the month of July

             Target Value   MSE        R
Training     26             1.11E-01   0.9637
Validation   6              4.97E-02   0.9495
Testing      5              3.30E-01   0.7872


Fig.3.23: Regression fit between observed and predicted rainfall of July at Hyderabad

Results for August: The three input series with the highest Pearson correlation are used as inputs to the NARX network with 12 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.12 and Figs. 3.24-3.25.

Fig.3.24: Regression analysis of NARX for training, validation and test sets


Fig.3.25: Regression fit between observed and predicted rainfall of August at Hyderabad

Comparison of forecasting NARX and ARIMA models is performed, and results are presented

in Fig. 3.26.

Table 3.12 Result of NARX for the month of August

             Target Value   MSE        R
Training     26             1.11E-01   1
Validation   6              4.97E-02   0.9632
Testing      5              3.30E-01   0.9365

3.4.3 NAWABSHAH

Result for June: The three input series with the highest Pearson correlation are used as inputs to the NARX network with 15 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.13 and Figs. 3.27-3.28.


Table 3.13 Result of NARX for the month of June

             Target Value   MSE        R
Training     26             2.76E-02   0.99395
Validation   6              3.11E-01   0.82285
Testing      5              1.28E-01   0.8178

Fig.3.26: Comparison of forecasting NARX and ARIMA models


Fig.3.27: Regression analysis of NARX for training, validation and test sets

Fig.3.28: Regression fit between observed and predicted rainfall of June at Nawabshah

Results for July: The three input series with the highest Pearson correlation are used as inputs to the NARX network with 11 delays per variable and three hidden neurons in one hidden layer with one output in the proposed NARX structure. After successfully building the NARX model, the results are calculated and presented in Table 3.14 and Figs. 3.29-3.30.


Table 3.14 Result of NARX for the month of July at Nawabshah

             Target Value   MSE        R
Training     26             1.48E-04   0.99961
Validation   6              1.77E-01   0.93352
Testing      5              1.97E-01   0.80644

Fig.3.29: Regression fit between observed and predicted rainfall of July at Nawabshah


Fig.3.30: Regression analysis of NARX for training, validation and test sets

Results for August: The three input rainfall series with the highest Pearson correlation are used as inputs to the NARX network with 12 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.15 and Figs. 3.31-3.32.

Fig.3.31: Regression Analysis of NARX for training, validation and test sets

Table 3.15 Result of NARX for the month of August

             Target Value   MSE        R
Training     26             1.16E-22   0.99999
Validation   6              9.38E-02   0.97716
Testing      5              7.83E-01   0.81494


Fig.3.32: Regression fit between observed and predicted rainfall of June at Nawabshah

Comparison of forecasting NARX and ARIMA models is performed, and results are presented

in Fig.3.33.

Fig.3.33: Comparison of forecasting NARX and ARIMA models

3.4.4 BADIN

Result for June:

The three input monthly mean rainfall series with the highest Pearson correlation are used as inputs to the NARX network with 13 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After building the NARX model, the results are generated and presented in Table 3.16 and Figs. 3.34-3.35.

Table 3.16 Result of NARX for the month of June

             Target Value   MSE        R
Training     26             4.02E-07   0.99999
Validation   6              1.90E-01   0.93099
Testing      5              3.04E-01   -0.96067

Fig.3.34: Regression analysis of NARX for training, validation and test sets


Fig.3.35: Regression fit between observed and predicted rainfall in June at Badin

Results for July: The three input monthly mean series with the highest Pearson correlation are used as inputs to the NARX network with 13 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After formulating the NARX model, the results are generated and presented in Table 3.17 and Figs. 3.36-3.37.

Fig.3.36: Regression analysis of NARX for training, validation and test sets


Table 3.17 Result of NARX for the month of July

             Target Value   MSE        R
Training     26             1.16E-01   0.96268
Validation   6              4.95E-02   0.77466
Testing      5              6.93E-01   0.93277

Fig.3.37: Regression fit between observed and predicted rainfall in July at Badin

Results for August: The three input monthly mean series with the highest Pearson correlation are used as inputs to the NARX network with 25 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.18 and Figs. 3.38-3.39. A comparison of the forecasting NARX and ARIMA models is performed, and the results are presented in Fig. 3.40.


Table 3.18 Result of NARX for the month of August

             Target Value   MSE        R
Training     26             3.72E-20   0.99999
Validation   6              2.12E-02   1
Testing      5              1.83E-02   1

Fig.3.38: Regression analysis of NARX for training, validation and test sets


Fig.3.39: Regression fit between observed and predicted rainfall in August at Badin

Fig.3.40: Comparison of forecasting NARX and ARIMA models

3.4.5 CHOR

Result for June: The three monthly mean rainfall input series with the highest Pearson correlation are used as inputs to the NARX network with 12 delays per variable, three hidden neurons in one hidden layer and one output in the proposed NARX structure.

Table 3.19 Result of NARX for the month of June

             Target Value   MSE        R
Training     26             1.59E-23   0.99999
Validation   6              2.02E-01   0.99487
Testing      5              2.97E-01   0.81554


Fig.3.41: Regression analysis of NARX for training, validation and test sets

After successfully constructing the NARX model, the results are generated and presented in Table 3.19 and Figs. 3.41-3.42.

Fig.3.42: Regression fit between observed and predicted rainfall in June at Chor

Results for July: The three monthly mean input series with the highest Pearson correlation are used as inputs to the NARX network with 18 delays per variable and three hidden neurons in one hidden layer; one output is used in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.20 and Figs. 3.43-3.44.

Table 3.20 Result of NARX for the month of July

             Target Value   MSE        R
Training     26             1.81E-02   0.94229
Validation   6              3.31E-01   0.92211
Testing      5              6.66E-01   0.98156

Results for August: The three monthly input series with the highest Pearson correlation are used as inputs to the NARX network with 14 delays per variable and three hidden neurons in one hidden layer; one output is used in the proposed NARX structure. After successfully building the NARX model, the results are generated and presented in Table 3.21 and Figs. 3.45-3.46. A comparison of the forecasting NARX and ARIMA models is performed, and the results are presented in Fig. 3.47.


Fig.3.43: Regression analysis of NARX for training, validation and test sets

Fig.3.44: Regression fit between observed and predicted rainfall in July at Chor

The NARX model error for the meteorological station Chor is much lower compared with the other stations, as shown in Fig. 3.45, after training, testing and validation of the NARX model with an overall correlation of R = 0.981122.


Fig.3.45: Regression analysis of NARX for training, validation and test sets

Fig.3.46: Regression fit between observed and predicted rainfall in August at Chor


Table 3.21 Result of NARX for the month of August

             Target Value   MSE        R
Training     26             4.56E-10   0.99999
Validation   6              1.36E-01   0.81172
Testing      5              2.10E-01   0.80952

Fig.3.47: Comparison of forecasting NARX and ARIMA


3.5. CONCLUSIONS

In this research, the available monthly mean rainfall data (1971-2012) of five synoptic stations in Sindh province are analyzed. The analysis shows that the summer rain patterns of Hyderabad, Nawabshah, Badin and Chor are highly correlated with each other, with a significance level of up to 0.80. The summer rain pattern of Karachi differs from the other synoptic stations included in the research, and its correlation with them is almost negligible. The results indicate that the NARX model, trained with the Levenberg-Marquardt algorithm, can satisfactorily forecast summer precipitation in Sindh province. The sigmoid activation function at the hidden layer and the linear activation function at the output layer are capable of producing accurate results. The study finds that the NARX model produces better forecasts and faster convergence compared with ARIMA. The neural network is confirmed as a highly suitable technique for predicting different climate conditions. Additional weather parameters, such as temperature, relative humidity and atmospheric pressure, can be included in the neural network modeling for more precise rainfall forecasting. The model can be improved further by including evolutionary and fractional metaheuristic algorithms.


CHAPTER 4

FRACTIONAL NEURO-SEQUENTIAL PARADIGM FOR PARAMETRIZATION MODELING

OF STOCK EXCHANGE VARIABLES WITH HYBRID ARFIMA-LSTM

_______________________________________________________

4.1 INTRODUCTION

Forecasting rapidly fluctuating, high-frequency financial data is a complex and challenging task. In this study, a novel hybrid model that combines the strength of the fractional-order derivative with the extraordinary dynamical features of deep-learning long short-term memory networks is presented to predict the abrupt stochastic variation of the financial market. Stock market prices are inherently dynamic, highly sensitive, nonlinear and chaotic. Traditionally, different techniques are available to forecast prices in the time-variant domain; owing to the variability and uncertain behavior of stock prices, traditional methods such as data mining, statistical approaches and non-deep neural network models are not well suited for prediction and do not generalize for forecasting stock prices. The ARFIMA (Autoregressive Fractionally Integrated Moving Average) model, in contrast, provides a flexible tool for classes of long-memory models. Recent studies on the advancement of machine learning in deep nonlinear modeling confirm that hybrid models efficiently extract deep features and model nonlinear functions. Long short-term memory (LSTM) networks are a special kind of RNN that maps sequences of input observations to output observations with the capability of capturing long-term dependencies. In this study we have developed a novel ARFIMA-LSTM hybrid recurrent network. The ARFIMA model filters linear tendencies in the data better than the ARIMA model and passes the residual to the LSTM model, which captures the nonlinearity in the residual values with the help of exogenous dependent variables. The model not only minimizes the volatility problem but also overcomes the over-fitting problem of neural networks. The model is evaluated using PSX company data from the stock market with RMSE, MSE and MAPE, in comparison with the ARIMA model, the


LSTM model and the generalized regression radial basis neural network (GRNN) ensemble method independently. The forecasting performance indicates the effectiveness of the proposed ARFIMA-LSTM hybrid model, which improves accuracy by 80% in terms of RMSE compared with traditional forecasting techniques.

The fast emergence of the digital economy is one of the most innovative contributions to the modern global economy. With the development of globalization, trade and business contacts and financial activities among nations are increasing. International trade and financial business are closely connected with stock rates [89]. The rapid development of digital currencies in the financial market has an abrupt impact on the movement of stock prices [90]. The forecasting of financial data depends on the collection frequency of the financial market, and the modeling of high-frequency financial data has become a research focus in the research community.

Forecasting future values of a time series has long been a major research area. Time series modeling finds significant application in business, stock exchanges, weather, electricity demand and many other fields [91]. Accurate forecasting of stock prices can guide investors to minimize risk and reduce investment losses [92]. The scientific approach to modeling time series emerged when Box and Jenkins [93] introduced their methodology for time series in 1970, in which the ARIMA model was introduced to forecast future behavior. Traditional time series forecasting methods depend mainly on exponential smoothing, autoregression and moving-average parameters, including the ARMA model, the ARIMA model [94] and the GARCH model [95]. Peters [96] noted the dynamic nature of stock markets, which are mostly non-Gaussian with sharper peaks and fat tails [97]. In the presence of such evidence, the traditional methods have their own limitations in providing accurate forecasts based on non-Gaussian data [98]. Sheng and Chen [99] proposed a new Autoregressive Fractionally Integrated Moving Average (ARFIMA) model to analyze the GSL data, predicted the future levels, and compared the accuracy with previously published results [100]. The ARFIMA class of


models presented by Diebold et al. [101] provided flexible techniques to capture long-memory processes. A neural network was used by Gao et al. [102] to predict the daily closing prices of S&P 500 stocks. An ARIMA and neural network hybrid model was discussed in Peter Zhang's work [103]. Chen et al. [104] predicted stock exchange data of the China stock market using the sequential features of an LSTM, with a sequence step of 30 days and 10 learning features in the model. A comparison of the traditional ARIMA model with the deep-learning features of LSTM for economic and financial data was carried out by Siami et al. [105]. Stock prediction using LSTM and MLP models was studied by Khare [106]. A hybrid ARIMA-LSTM model was presented by Choi et al. [107], in which the stock price correlation coefficient was analyzed by applying LSTM recurrent neural networks. The effect of currency and foreign exchange on stock market volatility was studied by Fang [108]. The fractional-order derivative is a generalized form of the integer-order derivative that is extensively applied for modeling different real phenomena in finance, psychology, bioengineering, mechanics and control theory. The concept of the fractional-order derivative emerged back in 1695 with the famous correspondence between L'Hopital and Leibniz about the possibility of fractional-order derivatives. The first application of fractional-order mathematics was contributed by Abel [109] in 1823, who solved the tautochrone integral problem with a fractional-order derivative of half order. The application of fractional-order differential equations has introduced new concepts and techniques in financial market forecasting. Modeling with fractional order and the Adomian decomposition method was introduced by Song et al. [110], with application to approximate semi-analytical solutions of the European option price model and China's financial market. Biologists deduced that biological organisms have fractional-order electric conductivity in their cell membranes [111], which is classified among non-integer-order models. Kumar et al. [112] proposed to estimate the coefficients of fractional-order differential equations.

4.1 OBJECTIVE OF STUDY

There are two main objectives of the study:


(a) To analyze the time series data, identify the nature of the phenomenon in the sequence of observations and study the pattern based on fractional differences.

(b) To forecast the nonlinear time series and predict future values on the basis of the pattern identified.

The innovative contributions of the designed hybrid neurocomputing approach, with exploration of its different capabilities, are presented in terms of the following salient features:

It provides a flexible tool for classes of long-memory models.

The ARFIMA model filters linear tendencies in the data better than the ARIMA model.

The model overcomes the over-fitting problem of neural networks besides minimizing the volatility problem.

The dynamical features of the data are captured by the ARFIMA-LSTM model with the help of exogenous dependent variables.

4.1.1 DEFINITION 1: GRUNWALD-LETNIKOV

Grünwald and Letnikov [113] presented a generalized form of the fractional-order derivative using a binomial expansion:

{}_{a}D_t^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-a)/h]} (-1)^j \binom{\alpha}{j} f(t - jh) \qquad (4.1)

where \binom{\alpha}{j} is the binomial coefficient and \alpha is the constant order, which can be expressed through Euler's Gamma function as follows:

\binom{\alpha}{j} = \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\, \Gamma(\alpha - j + 1)} \qquad (4.2)


4.1.2 DEFINITION 2: MICHELE CAPUTO

Michele Caputo [114] defined the fractional-order derivative through an integral formulation as follows:

{}^{C}_{a}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n - \alpha)} \int_{a}^{t} \frac{f^{(n)}(\tau)}{(t - \tau)^{\alpha - n + 1}}\, d\tau \qquad (4.3)

where \alpha is a real number and n is an integer. The Grünwald-Letnikov definition is identical to the Caputo definition of the fractional derivative except in the case of a constant function, for which the Caputo derivative is zero, while the Riemann-Liouville derivative of a constant is non-zero.

4.1.3 DEFINITION 3: ATANGANA-BALEANU

The left Atangana-Baleanu [115] fractional derivative on the interval 0 < \alpha < 1 in the Sobolev space is defined by:

(T^{\alpha} h)(x) = \frac{B(\alpha)}{1 - \alpha} \int_{0}^{x} h'(s)\, E_{\alpha}\!\left[ -\frac{\alpha}{1 - \alpha} (x - s)^{\alpha} \right] ds \qquad (4.4)

where h \in H^{1}(0,1) in the Sobolev space, B(\alpha) > 0 is a normalization function satisfying the condition B(0) = B(1) = 1, and E_{\alpha} is the single-parameter Mittag-Leffler function.

4.1.4 DEFINITION 4: RIEMANN-LIOUVILLE

The Riemann-Liouville definition [116] applies integer-order differentiation to a fractional integral to define the fractional-order derivative as:

{}_{a}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n - \alpha)} \frac{d^{n}}{dt^{n}} \int_{a}^{t} (t - \tau)^{n - \alpha - 1} f(\tau)\, d\tau \qquad (4.5)


The fractional derivative of a power function, using the Riemann-Liouville definition in terms of the Gamma function, is given by

\frac{d^{q}}{dx^{q}} x^{m} = \frac{\Gamma(m + 1)}{\Gamma(m - q + 1)}\, x^{m - q} \qquad (4.6)

For m = 2 the equation becomes

\frac{d^{q}}{dx^{q}} x^{2} = \frac{\Gamma(3)}{\Gamma(3 - q)}\, x^{2 - q} \qquad (4.7)

Taking fractional derivatives of order 0.75, 0.50, 0.25, 0.1 and 0.01, the geometrical representation of the fractional derivative is shown in Figure 4.1.
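Eq. (4.7) can be evaluated directly; the short Python sketch below (illustrative only; the sample points are assumptions) computes the fractional derivatives of f(x) = x^2 for the orders plotted in Figure 4.1.

import numpy as np
from math import gamma

def frac_power_rule(x, q, m=2):
    # Riemann-Liouville power rule, Eqs. (4.6)-(4.7):
    # d^q/dx^q (x^m) = Gamma(m+1) / Gamma(m-q+1) * x^(m-q)
    return gamma(m + 1) / gamma(m - q + 1) * x ** (m - q)

# Evaluate the curves of Figure 4.1 for f(x) = x^2 at a few sample points
x = np.linspace(0.1, 2.0, 5)
for q in (0.75, 0.50, 0.25, 0.10, 0.01):
    print(q, frac_power_rule(x, q))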

Fig 4.1 Fractional-order representation of the function f(x) = x^2

4.2 FRACTIONAL TIME SERIES

Fractional time series analysis was developed by Harold Hurst [117] while calculating the optimal dam size for the river Nile, which was directly linked with the fractional dimension of the flow. Consider d as the periodic time duration over which the range R is measured, where R is the difference between the largest and smallest cumulative deviations encountered during the interval d. The relation can be represented as

R \propto d^{H}

where H is the Hurst exponent, varying from zero to one; a higher value of the Hurst exponent corresponds to a smaller fractal dimension (a smoother curve).

4.3 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK

(GRNN)

Mathematically, the GRNN [118] can be represented by the equation

Y(x) = \frac{ \sum_{k=1}^{N} w_k\, K(x, x_k) }{ \sum_{k=1}^{N} K(x, x_k) } \qquad (4.8)

where Y(x) is the prediction for the input variable x, w_k is the activation weight for the pattern layer, and K(x, x_k) is the Gaussian radial basis function formulated as:

K(x, x_k) = e^{-d_k / (2\sigma^2)} \qquad (4.9)

where d_k is the squared Euclidean distance defined as d_k = (x - x_k)^{T} (x - x_k).
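A minimal sketch of the GRNN estimate of Eqs. (4.8)-(4.9) is given below (Python/NumPy, illustrative only; using the training targets as the pattern-layer weights w_k and the chosen smoothing parameter sigma are assumptions for demonstration).

import numpy as np

def grnn_predict(x, X_train, y_train, sigma=1.0):
    # GRNN estimate of Eq. (4.8) with the Gaussian kernel of Eq. (4.9);
    # the training targets y_train play the role of the pattern-layer
    # weights w_k.  sigma is the smoothing (spread) parameter.
    x = np.atleast_2d(x)
    d = ((x[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)  # squared distances d_k
    K = np.exp(-d / (2.0 * sigma ** 2))                           # kernel values
    return (K @ y_train) / K.sum(axis=1)                          # weighted average

# Toy usage: one-dimensional regression of y = x^2 from a few samples
X_train = np.linspace(-2, 2, 9).reshape(-1, 1)
y_train = (X_train ** 2).ravel()
print(grnn_predict(np.array([[0.5], [1.5]]), X_train, y_train, sigma=0.5))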

4.4 STATISTICAL DESCRIPTION OF DATA

In this section the statistical description of the Fauji Fertilizer Company (FFC) open price data [128] is presented. We have used daily open price data from 01 January 2009 to 30 May 2018, with n = 3437 observations. For modeling purposes, however, we have considered the daily data up to 30 April 2018; the remaining one month of data is used to analyze the forecasting behavior of the proposed model. From the graphical analysis it is easy to identify a pronounced increasing trend from 01 January 2009 up to 19 October 2011; then a sudden declining trend in the open price can be noticed until 05 January 2015, followed by an increase in open prices until 19 December 2017, after which a final descending trend is noticed until 30 May 2018, as shown in Figure 4.2.

Fig 4. 2 Graphical representation of FFC daily data 2009-2018

The highest variation in the seasonal data can be noticed in the months of March, June, July and December of each year from 2009 to 2018, as shown in Figure 4.3.

Fig 4. 3 Probability distribution of FFC open price

The FFC open price data show sharper peaks, representing high-frequency data that deviate from the Gaussian distribution curve, as shown in Figure 4.3.


The probability distribution of the data with a percentile Gaussian fit is shown in Figure 4.4. The p-value of the fit is less than the value required for a normal distribution; in particular, FFC open values greater than 114 do not fit the Gaussian distribution and produce kurtosis in the vertical spread.

Fig 4. 4 Percentile Gaussian fit of FFC open price

Fig 4. 5 Seasonal plot of FFC company from 2009-2018


The seasonal plot of the FFC open price, with a strong upward trend and a high degree of automated trading noticeable in the data, is shown in Figure 4.5; it depicts that the yearly variation in the months of June and July and at the end of each year remains high compared with the remaining months.

Fig 4. 6 Graph of dependent variables used in the modeling.

The statistical description of the dependent variables used to predict the FFC open price is presented in Table 4.1 and Figure 4.6. The correlation between oil prices and the FFC open price remains high compared with the other dependent variables, while the relationship between foreign reserves and the FFC open price remained very close in the highest-variation years of 2012 and 2016.


Table 4.1 Statistical description of FFC open price with the dependent variables

Statistics   FFC open   PK Currency   KSE100 index   Oil Prices   Foreign Reserves
Mean         114.67     96.52         24359.1        73.47        22126.75
Median       112.02     98.65         22930.1        76.01        16432.42
Mode         114        104.8         10519          44.66        13248.56
St Dev       21.66      9.49          13432          22.46        24086.01
Kurtosis     1.48       -1.19         -1.26          -1.37        18.05
Skewness     0.93       -0.26         0.32           -0.12        4.23
Minimum      58.73      68.21         4815.34        26.21        7589.6
Maximum      198.35     115.64        52876.5        113.93       170454

4.5 ARIMA AND ARFIMA MODEL

In this section we discuss some basic concepts and background of both models; afterwards, the proposed hybrid model is described.

4.5.1 ARIMA MODEL

The mathematical representation of the ARIMA model was first introduced by Box and Jenkins [119] in their 1970 book to forecast future trends, and is represented by the equations:


x_t = c + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q} = c + \sum_{k=1}^{p}\phi_k x_{t-k} + \varepsilon_t + \sum_{l=1}^{q}\theta_l \varepsilon_{t-l}        (4.10)

where φ(B) = 1 − φ_1 B − … − φ_p B^p and θ(B) = 1 + θ_1 B + … + θ_q B^q are polynomials in the backshift operator B, φ_i (i = 1, 2, …, p) and θ_i (i = 1, 2, …, q) are the autoregressive and moving-average parameters, and ε_t represents white noise with mean zero and variance σ². Such a time series depends not only on its own previous terms but also on other phenomena and other variables [120].

A process {X_t}, t = 1, 2, …, T, satisfying y_t = (1 − B)^d X_t becomes a long-memory process [121] when the following conditions hold:

(a) \lim_{n\to\infty}\sum_{k=-n}^{n}\left|\rho_k\right| is not finite, i.e. the autocorrelation function (ACF) of the process diverges;

(b) the series {X_t} is a fractionally differenced series.

The ARIMA(p,d,q) model, with integer differencing order d, can only capture short-range dependence, whereas the ARFIMA model, introduced by Granger and Joyeux [122], is used for long-range dependent time series.

We have used the R software to fit the data with the ARIMA and ARFIMA models. The residuals of the fitted ARIMA model of the FFC open price, with the ACF lag and residual plots, are shown in Figure 4.7.


Fig 4. 7 ARIMA Residual plot and its ACF and PACF Lag plot of FFC open price.

4.5.2 ARFIMA MODEL

The ARFIMA(p,d,q) model defines d for any real number using the binomial expansion and the gamma function as

(1 - B)^{d} = \sum_{j=0}^{\infty}\binom{d}{j}(-B)^{j} = \sum_{j=0}^{\infty}\frac{\Gamma(d+1)}{\Gamma(j+1)\,\Gamma(d+1-j)}\,(-B)^{j}        (4.11)

where −1/2 < d < 1/2.

Shaofei et al. [123] and many other authors [124] suggest that using a fractional ARIMA instead of an integer-order one can improve forecasting. The general form of the ARFIMA(p,d,q) process is defined as

\phi(B)(1 - B)^{d} X_t = \theta(B)\,\varepsilon_t        (4.12)

where −1/2 < d < 1/2.
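The binomial expansion in Eq. (4.11) translates directly into a recursion for the fractional-differencing weights. The following Python sketch, with illustrative function names and a simple truncation rule, shows one way the filter (1 − B)^d could be applied to a series; it is an illustration of the formula, not the exact routine used in the thesis.

```python
# Sketch of the fractional difference filter (1 - B)^d of Eq. (4.11).
import numpy as np

def frac_diff_weights(d, n_weights):
    """pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j, so (1-B)^d x_t = sum_j pi_j x_{t-j}."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for j in range(1, n_weights):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

def frac_difference(x, d):
    """Apply the truncated fractional difference filter to a 1-D series x."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(len(x))])
```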


The above model is widely used for both long-range dependent (LRD) and short-range dependent (SRD) time series [125]. In ARFIMA(p,d,q), p is the autoregressive order, q is the moving-average order and d is the differencing order in decimal form. The ARFIMA(p,d,q) process is a generalized form of the ARIMA process, since for integer values of d the ARFIMA model reduces to the ARIMA model. Many non-stationary time series contain a nonlinear trend, and removing the trend is the first step in modeling such series; Box-Jenkins theory serves as a filter to separate the signal from the noise. In the residuals of the ARIMA model in Figure 4.7 we may notice a pattern of fractional correlation that commences at the first lag. In such conditions, fractional differences are useful for capturing the non-linearity by applying the binomial expression to estimate the ARFIMA(p,d,q) parameters; after applying the fractional-order difference filter, the residual obtained is uncorrelated with its own lags. Mandelbrot [126] suggested the use of the range-over-standard-deviation (R/S) statistic, called the "rescaled range", originally used by the hydrologist Harold Hurst [127], from which the Hurst exponent is produced. The main idea of R/S analysis is to analyze the rescaled cumulative deviations from the mean. The first quantity, the range R, is given by:

R_n = \max_{1\le j\le n}\sum_{i=1}^{j}\left(Y_i - \bar{Y}_n\right) \;-\; \min_{1\le j\le n}\sum_{i=1}^{j}\left(Y_i - \bar{Y}_n\right)        (4.13)

where R_n is the range of the accumulated deviations of Y over a period of length n. The standard deviation S_n is defined as

S_n = \left[\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \bar{Y}_n\right)^{2}\right]^{1/2}        (4.14)

As n increases, the following relation holds:

\log\!\left[R_n / S_n\right] = \log c + H \log n        (4.15)


which reflects linearity in the estimation of the Hurst slope H. In the ARFIMA model, the fractional intensity d of the Gaussian noise in the data is estimated from the maximum-likelihood Hurst parameter through the relation

d = H - 1/2        (4.16)

This relationship permits researchers to define certain boundaries, as follows:

(a) if d = 0, the process contains no long-term memory and is stationary;

(b) if 0 < d < 1, the process is persistent with long-term memory;

(c) if d = 0.5, the process represents a random walk and is unpredictable.
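The R/S procedure of Eqs. (4.13)-(4.16) can be sketched in a few lines of NumPy. The window sizes, the non-overlapping block scheme and the least-squares fit below are illustrative choices, not the exact settings used for the FFC data.

```python
# Sketch of rescaled-range (R/S) estimation of the Hurst exponent, Eqs. (4.13)-(4.16).
import numpy as np

def rescaled_range(y):
    y = np.asarray(y, dtype=float)
    dev = np.cumsum(y - y.mean())           # accumulated deviations from the mean
    R = dev.max() - dev.min()               # range R_n, Eq. (4.13)
    S = y.std()                             # standard deviation S_n, Eq. (4.14)
    return R / S

def hurst_exponent(y, window_sizes=(16, 32, 64, 128, 256)):
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(y[i:i + n]) for i in range(0, len(y) - n + 1, n)]
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    H = np.polyfit(logs_n, logs_rs, 1)[0]   # slope of log(R/S) vs log n, Eq. (4.15)
    return H, H - 0.5                       # Hurst exponent and d = H - 1/2, Eq. (4.16)
```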

The estimate of d in financial data series typically differs from 0 and 0.5; Caporale [129] pointed out the presence of long-term memory in the US stock exchange. The parameter estimation results of ARFIMA(1,d,3) for the FFC company are shown in Table 4.2, and the ARFIMA residual plot with its ACF and PACF lag plots for the FFC company is shown in Figure 4.8. The best-fitted fractional difference is calculated as d = 0.499914.

Table 4.2 Parameter estimation result ARFIMA(1,d,3) for FFC company

Parameter Coefficient Std err t-Ratio p-Value

d 0.499914 0.00123 14.32 0.003

ᴪ1 -0.60693 0.0188 18.65 0.021

ᴪ2 -0.44672 0.02083 -3.2 0.01


constant 2.547838 3.01795 4.06 0.014

Fig 4. 8 ARFIMA residual of open price FFC Company from 2009-2018

4.6 LSTM MODEL

Neural networks are efficient at extracting nonlinear features from long-memory data because of their versatility and the use of nonlinear activation functions in each layer. Kumarasinghe et al. [130] designed a Long Short-Term Memory (LSTM) network for intelligent prediction of the Colombo Stock Exchange. To understand the working of the LSTM model, consider the RNN mechanism, a sequential model that performs effectively by feeding the time-series data as an input vector and producing an output vector through the neural network structure in the model's cell, as shown in Figure 4.9. The time-series data pass through the cell as sequential vectors; at each step the output value of the cell is concatenated with the data of the next time step, so that the cell output serves as an input for the next time step.

Fig 4. 9 Structure of RNN Neural Network


Fig 4.10 Overall graphical abstract of the proposed ARFIMA-LSTM technique for modeling of the FFC open price. The workflow proceeds in five steps: Step 1, data cleaning and transformation (checking irrelevant and repeated terms, detrending and removing seasonality, normalizing the data between 0 and 1, and adding exogenous parameters); Step 2, the ARFIMA process (estimating the AR and MA parameters by the Box-Jenkins methodology, estimating d using wavelet analysis, and re-estimating the parameters Ø and Ө until a convergent series is obtained and the termination criterion at the tolerance level is achieved, leaving the ARFIMA residual as Gaussian noise); Step 3, LSTM modeling of the residual (refining the optimization variables at each iteration until the termination criterion is achieved); Step 4, combining the fitted ARFIMA and LSTM models and backtesting the estimates on data outside the collected sample; and Step 5, results (calculating the hybrid ARFIMA-LSTM approximate solutions, comparing them with numerical solutions, and comparing the performance indices RMSE and MSE).


The cell in the figure can be substituted with various types of cells. In this research we have selected the standard LSTM with forget gates introduced by F. Gers [131]. The LSTM consists of interacting neural networks representing the forget gate, input gate, input candidate gate and output gate, as shown in Figure 4.11. The output value of the forget gate varies between zero and one; the forget gate discards the parts of the previous cell state that are not needed and keeps the information necessary for prediction, and is represented as

f_t = \sigma\!\left(W_f\cdot[h_{t-1}, x_t] + b_f\right)        (4.17)

The activation function σ, often called the sigmoid, enables the nonlinear capabilities of the model:

\sigma(x) = \frac{1}{1 + e^{-x}}        (4.18)

In the next step, the input gate and the input candidate gate act together to make a new cell state C_t, which shifts to the next time step as the renewed cell state. The sigmoid activation function and the hyperbolic tangent function are used as the activation functions of the input gate and the input candidate gate, respectively, providing the selection output i_t and the new candidate cell state C'_t represented by the equations

i_t = \sigma\!\left(W_i\cdot[h_{t-1}, x_t] + b_i\right), \qquad C'_t = \tanh\!\left(W_c\cdot[h_{t-1}, x_t] + b_c\right)        (4.19)

The tanh function is the hyperbolic tangent, whose output lies between -1 and 1:


\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}        (4.20)

Fig 4.11 Hybrid LSTM Neural Network Structure
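Putting Eqs. (4.17)-(4.20) together, one forward step of an LSTM cell can be sketched as below. The output-gate update is included for completeness, and all weight and function names are illustrative; this is a conceptual sketch, not the trained network of the thesis.

```python
# Minimal NumPy sketch of one LSTM forward step following Eqs. (4.17)-(4.20).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_c, b_c, W_o, b_o):
    z = np.concatenate([h_prev, x_t])              # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)                   # forget gate, Eq. (4.17)
    i_t = sigmoid(W_i @ z + b_i)                   # input gate, Eq. (4.19)
    C_tilde = np.tanh(W_c @ z + b_c)               # candidate cell state, Eq. (4.19)
    C_t = f_t * C_prev + i_t * C_tilde             # renewed cell state
    o_t = sigmoid(W_o @ z + b_o)                   # output gate
    h_t = o_t * np.tanh(C_t)                       # hidden state passed to the next step
    return h_t, C_t
```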

The Augmented Dickey-Fuller (ADF) test is used to check that the transformed time series is stationary. The LSTM input is the residual of the FFC open-price historical data modeled by the ARFIMA model; we also use the dependent variables to model the residual values of the FFC data after filtering by the ARFIMA model.

4.7 GENERALIZED REGRESSION RADIAL BASIS NEURAL NETWORK

(GRNN)

A generalized regression neural network (GRNN) is used for function approximation [132]. It consists of two layers: the first is a radial basis layer and the second is a special linear layer. The architecture of the GRNN is shown in Figure 4.12. It is similar to an RBF neural network; the only difference is the addition of the second layer. The input vector is represented by P, and the bias vector b1 is set as a column vector. Each neuron in the radial basis layer computes a weighted input with its bias value, which passes through the second layer to produce the generalized regression output.


Fig 4.12 The architecture of the generalized regression radial basis neural network. The input vector p (R×1) feeds a radial basis layer of Q neurons, a^1 = radbas(||IW^{1,1} − p|| · b^1), followed by a special linear layer, a^2 = purelin(n^2), which produces the regression output y.

where R is the number of elements in the input vector and Q is the number of neurons in each layer.

4.8 PROPOSED HYBRID ARFIMA-LSTM MODEL

The residual white noise of the ARFIMA model is processed to detect remaining patterns, with the addition of exogenous variables, in the hybrid LSTM model. The noise is passed through the LSTM neural network to model the left-over signal with the help of the external variables. The time-series data decompose into linear and nonlinear components, which we can express as follows:

x_t = L_t + N_t        (4.21)

Here L_t represents the linear component of the data at time t, modeled by the ARFIMA model, which performs well on linear problems.


\varepsilon_t = x_t - L_t        (4.22)

where ε_t is the residual left by the ARFIMA model. The LSTM component is then calculated by the equation

N_t = f(\varepsilon_t) = f(x_t - L_t)        (4.23)

where N_t represents the nonlinear component for period t of the time series, modeled from the ARFIMA residual and the dependent variables by the hybrid LSTM neural network. The two models are combined to capture both the linear and the nonlinear tendencies of the data.

Fig 4.13 Hybrid LSTM model of FFC data open price with sequential correlation

In predictive model selection we have used a 30-step forecast to evaluate the performance of the model, as shown in Figure 4.13. The LSTM training, testing and prediction phases are depicted in the form of the following algorithm:

Algorithm : LSTM model training algorithm


The LSTM prediction algorithm works in the following four main phases:

(a) Data preprocessing

(b) Fixing model parameters

(c) Model fitting and estimation

(d) Model prediction

Input: Residual: residual values of the ARFIMA model with 05 dependent variables.

N.steps: the lag step between each input and output of the time-series data.

Output: train.Pred, test.Pred: the predicted train and test data of the multivariate time series.

Phase 1: Data preprocessing

(a) Normalize the residual data

(b) Convert into input/output pairs with a 75:25 split

(c) train.LSTM, test.LSTM = divide(Residual, 0.75)

(d) X.train, y.train = split(train.LSTM, N.steps)

(e) X.test, y.test = split(test.LSTM, N.steps)

(f) Reshape the train and test input data

Phase 2: Determine model parameters

(g) Define the model

(h) Add LSTM(units=30, activation='relu', input_shape=(N.steps, n.features))

(i) Add LSTM(units=30, activation='relu')

(j) Add Dense(n.features=2)

Phase 3: Model fitting & estimation

(k) Repeat

(l) Forward-propagate the model with X.train

(m) Backward-propagate the model with y.train

(n) Update the model parameters

(o) MSE, MAE = evaluate.model(X.train, y.train)

(p) If MSE has converged: end; otherwise repeat

Phase 4: Prediction

(q) train.Pred = predict(X.train)

(r) test.Pred = predict(X.test)

(s) Return train.Pred, test.Pred
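The algorithm above maps naturally onto a stacked LSTM built with Keras. The sketch below assumes TensorFlow/Keras as the deep-learning backend; the two LSTM layers of 30 ReLU units follow the listed parameters, while the single-unit output layer, the windowing helper and the training settings (Adam, 150 epochs) are simplifications chosen for illustration rather than the exact configuration of the thesis.

```python
# Illustrative Keras sketch of Phases 1-4 of the LSTM training algorithm.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def split_sequences(data, n_steps):
    """Phase 1(d)-(e): turn a (T, n_features) residual array into lagged input/output pairs."""
    X, y = [], []
    for i in range(len(data) - n_steps):
        X.append(data[i:i + n_steps])
        y.append(data[i + n_steps, 0])          # predict the residual column one step ahead
    return np.array(X), np.array(y)

def build_lstm(n_steps, n_features):
    model = Sequential([
        LSTM(30, activation='relu', return_sequences=True,
             input_shape=(n_steps, n_features)),
        LSTM(30, activation='relu'),
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])   # Phase 3 tracks MSE/MAE
    return model

# Phases 3-4 (illustrative usage):
# model = build_lstm(n_steps, n_features)
# model.fit(X_train, y_train, epochs=150, validation_data=(X_test, y_test))
# test_pred = model.predict(X_test)
```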

4.8.1 EVALUATION CRITERIA

In order to evaluate performance of proposed nonlinear combination model, we use mean

absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error

(MAPE) defined as follows:


MAE = \frac{1}{N}\sum_{t=1}^{N}\left|y_t - \hat{y}_t\right|, \qquad RMSE = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^{2}}, \qquad MAPE = \frac{100\%}{N}\sum_{t=1}^{N}\left|\frac{y_t - \hat{y}_t}{y_t}\right|        (4.24)
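These three metrics transcribe directly into NumPy; the function names below are illustrative.

```python
# Direct NumPy transcription of the error metrics in Eq. (4.24).
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(np.asarray(y, float) - np.asarray(y_hat, float)))

def rmse(y, y_hat):
    return np.sqrt(np.mean((np.asarray(y, float) - np.asarray(y_hat, float)) ** 2))

def mape(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
```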

4.9 EXPERIMENTAL RESULT OF ARFIMA-LSTM

LSTM model training and cross-validation are carried out using the Adam algorithm, with a 75:25 split of the data set for the training and testing processes, respectively. Model performance is measured using the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) as formulated in Eq. (4.24). The performance accuracy of each model is summarized in Table 4.3, and the forecasting results are described in Table 4.4 and Figure 4.14.

Fig 4.14 LSTM model fitting of residual of ARFIMA model FFC open price

The training and testing error of the LSTM model on the FFC open-price residual was found to be minimal at 150 epochs, as shown in Figure 4.15. The hybrid ARFIMA-LSTM achieved the lowest RMSE of 0.73 as compared with the LSTM, ARFIMA and ARIMA models individually. The comparison of the results of the different models mentioned above is shown in Table 4.4.


Fig 4.15 Training and testing error of the LSTM model on the residual of the FFC open price.

Table 4.3 Forecast statistics of the FFC open price using ARIMA, ARFIMA, GRNN and hybrid ARFIMA-LSTM

MODEL MAE RMSE MAPE (%)

ARIMA 0.1566 0.3132 0.1896

ARFIMA 0.1352 0.2704 0.1633

ARFIMA-LSTM 0.02694 0.0539 0.002

GRNN 0.0315 0.0629 0.0114

Fig 4.16 GRNN architecture for prediction of FFC open Price


In the GRNN modeling we used two layers; in the first layer a total of 2316 neurons was used to fit the regression with the RBF neural network, as shown in Figure 4.16. The 3317 observations of daily FFC stock open-price data from 01 January 2009 to 30 April 2018 were used as the training output of the generalized regression radial basis neural network, while the three modeled variables ARIMA, ARFIMA and ARFIMA-LSTM were used as inputs for training. The remaining 30 values of the three modeled variables for the month of May 2018 were used to predict the FFC open price with the generalized RBF neural network.

Table 4.4 The FFC forecast results using ARIMA, ARFIMA and hybrid ARFIMA-LSTM

DATE FFC ARFIMA-LSTM GRNN ARIMA ARFIMA

2-May-18 99.76 103.89 103.96 58.74 119.6

3-May-18 99.58 105.06 103.96 67.88 119.49

4-May-18 98.3 103.81 103.96 69.51 119.4

7-May-18 98.05 103.65 98.24 71.23 118.73

8-May-18 99.39 102.42 98.24 74.8 118.47

9-May-18 97.21 99.85 98.24 78.45 115.84

10-May-18 97.44 98.99 97.37 81.6 115.29

11-May-18 98.05 97.93 96.55 77.78 115.36

14-May-18 98.14 98.29 96.55 73.94 114.07


15-May-18 96.49 101.29 105.96 74.53 113.5

16-May-18 96.61 95.51 96.55 58.74 119.6

17-May-18 96.59 94.11 96.55 67.88 119.49

18-May-18 96.51 95.1 96.55 69.51 119.4

21-May-18 94.7 97.36 97 71.23 118.73

22-May-18 94.22 94.03 87.38 74.8 118.47

23-May-18 98.42 94.3 87.38 78.45 115.84

24-May-18 98.75 94.47 96.55 81.6 115.29

25-May-18 98.33 94.36 96.55 77.78 115.36

28-May-18 97.95 92.54 86.27 73.94 114.07

29-May-18 98.56 94.1 86.27 74.53 113.5

A graphical comparison of the FFC forecast results using ARIMA, ARFIMA, GRNN and the hybrid ARFIMA-LSTM is shown in Figure 4.17, and error comparisons for the proposed model against the others are shown in Figures 4.18 and 4.19.


Fig 4.17 Graphical comparison of FFC forecast results using ARIMA, ARFIMA, GRNN and

hybrid ARFIMA-LSTM

Fig 4.18 Graphical comparison of MAE Error FFC open price forecast


Fig 4.19 Parametric comparison of the MAE error of the FFC open-price forecast (MAE plotted against date for the GRNN, ARFIMA-LSTM, ARFIMA and ARIMA models).

4.10 CONCLUSION

In this chapter, a hybrid ARFIMA-LSTM model is proposed, based on the combination of ARFIMA modeling and LSTM modeling of its residual. The hybrid model extracts the remaining information from the residual with the help of exogenous dependent variables and achieves better prediction accuracy by combining both models. The addition of the exogenous dependent-variable inputs in the hybrid ARFIMA-LSTM improves prediction accuracy compared with ARIMA, ARFIMA and GRNN independently. The error analysis for all the models is presented in Table 4.3, which shows that the proposed model attains the lowest MAPE of 0.002%. It can therefore be concluded that the proposed hybrid ARFIMA-LSTM model outperforms the individual models. The superior performance of the proposed hybrid model establishes it as the best parameterized model for enhancing financial series prediction with a high accuracy rate.

Acknowledgment

We extend our thanks to Syed Asghar Abbas Naqvi, Regional Head, Islamabad, Pakistan Stock Exchange, for providing the PSX dataset used in this research.


Data Availability

All datasets generated during the current study are available from the corresponding author

upon request.


CHAPTER 5

DESIGN OF HYBRID NAR-RBFS NEURAL NETWORK FOR DYNAMICAL ANALYSIS OF

NONLINEAR DUSTY PLASMA SYSTEM

5.1 INTRODUCTION

Robust modeling of multimodal dynamic systems is a challenging and fast-growing area of research. In this study, an integrated computing paradigm based on a Nonlinear Autoregressive Radial Basis Functions (NAR-RBFs) neural network model, a new family of deep learning with the strength of hybrid artificial neural networks, is presented for the solution of the nonlinear chaotic dusty system (NCDS) of tiny ionized gas particles arising in fusion devices, industry, astronomy and space. In the proposed methodology, special transformations are introduced for a class of differential equations, which convert a local optimum to a global optimum. The proposed NAR-RBFs neural network model is implemented on the bimodal NCDS represented by the Van der Pol-Mathieu Equation (VdP-ME) for different scenarios, based on variation in the dust gain production and loss rates, in both small and large time domains. Excellent agreement of the results with a standard state-of-the-art numerical solver is verified by attaining RMSE values down to 10^-38 for the bimodal VdP-ME. The accuracy of the proposed model in the critical time domain is also validated by convergence, stability and consistency analysis of statistics calculated from the absolute error, root mean square error and analysis-of-variance metrics. The method can help to build a generalized framework for the modeling of higher-order ODEs and PDEs beyond nano-technology, particularly in the unstable regions of such systems. Machine learning for multimodal systems is a vibrant, multi-disciplinary field of increasing importance, with extraordinary potential ranging from audio-visual speech and human multimodal behavior to the recent explosion of interest in fusion devices, industry, astronomy and space. The dynamic behavior of different systems in science and technology is modeled [133] by differential equations as functions of time.


The exact parametric solutions of these differential equations are difficult or even impossible [134] to obtain in many cases. Mathematical models based on artificial intelligence have universal approximation capabilities [135], and micro-level modeling of differential equations by neural networks has shown good promise [136] in machine learning results. However, deep-learning solutions of stochastic differential equations are often unstable and become trapped in local optima [137], particularly for higher-order differential equations. In the absence of a true solution for ODEs and PDEs, long computational times are required [138], or the procedure may even fail [139] due to the unavailability of a network transfer function between the time steps of discrete modeling. Robust modeling of multimodal dynamic systems with a machine learning process therefore remains a unique challenge for researchers: combining different modalities with varying levels of stochastic noise, particularly in unstable and critical regions, is difficult to model with a single machine learning technique. Initial research on multimodal learning in the field of audio-visual speech recognition was carried out by B. P. Yuhas et al. [140] in 1989. A major contribution in the form of a review and meta-analysis of recent work on multimodal systems was published by D'Mello et al. [141] in 2015, who revealed that involving more than one modality in the modeling of a multimodal system increases performance, but that the improvement is lessened when recognizing naturally occurring emotions. Bernardi et al. [142] in 2016 suggested a probabilistic graphical model to construct representations through latent random variables. Autoencoders were used by J. Ngiam et al. [143] in 2011 to build multimodal patterns with end-to-end trained neural networks. Kalchbrenner et al. [144] in 2013 introduced recurrent continuous translation models, which first convert a source modality into vector form and then use a decoder module to build the target modality. J. Rajendran et al. [145] in 2015 used bridge correlation neural networks for multimodal representation. The dusty plasma Van der Pol-Mathieu equation, being also a bimodal dynamical system, is used by many researchers for the development of machine learning paradigms. Dusty plasmas are ionized gases [146] containing discharged tiny matter particles, found in fusion devices and in space. Dust voids and dust solutions depend intensively on the plasma's ionization constraints [147] in a complex plasma system, and an exact solution for the ionization parameter governing the evolution of dust voids and their subsequent decay in the critical time domain is difficult [148] to achieve. The use of radial basis functions is an active field of regression modeling research. Neural network modeling with a sigmoid transfer function and the Levenberg-Marquardt training algorithm often results in over-fitting or under-fitting on noisy data [149]. Moreover, simple neural networks and algorithms, including the Nelder-Mead method [150], directed search methods [151] and gradient methods [152], restrict parametric estimation [153] for the solution of chaotic differential equations.

Lee and Kang [154] in 1990 used neural network algorithms to solve first-order ordinary differential equations by applying a Hopfield neural network. Another contribution was made by Meade and Fernandez [155], who developed B1-splines in 1994 for the solution of linear as well as nonlinear ordinary differential equations. Lagaris et al. [156] used artificial neural networks based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm for solving ODEs and PDEs, and Lagaris and Likas [157] explored neural networks for boundary value problems including irregular boundaries. Parisi et al. [158] presented solutions of ODEs with feed-forward neural networks. Hybrid techniques with optimization potential were presented by Malek and Shekari [159] to solve higher-order ODEs. The generalization ability of radial basis functions for solving differential equations was discussed by Choi and Lee [160] in 2009. Yadi et al. [161] used a kernel least-squares algorithm for ODE solutions, and Selvaraj and Samant [162] developed a new neural network algorithm for matrix Riccati differential equations. A feed-forward neural network with the back-propagation algorithm was applied by Mall et al. [163] to a class of PDEs. A survey of the RBF neural network method and the multilayer perceptron method for the solution of differential equations was conducted by Kumar and Yadav [164]. A new emerging direction originated with Tsoulos et al. [165], who solved ODEs based on grammatical evolution. Numerical solution of elliptic PDEs by RBF neural networks was presented by Jianyu et al. [166], and an unsupervised RBF neural network method with a multilayer perceptron for the numerical solution of PDEs was discussed by Shirvany et al. [167]. A multiquadric RBF network was applied by Mai-Duy [168] for the numerical solution of differential equations, while Leephakpreeda [169] solved differential equations using fuzzy logic. Mosta and Sibanda [170] solved the bimodal Van der Pol equation by a linearization method, and Akbari et al. [171] used an algebraic method for the solution of the Duffing equation. Nourazar and Mirzabeigy [172] approximated the Van der Pol equations numerically by modified transformation techniques. The homotopy analysis method was used by Kimiaeifar et al. [173] for the solution of double-well and double-hump Van der Pol-Duffing oscillator equations, and an active control technique was applied by Njah and Vincent [174] for the solution of double-well Duffing-Van der Pol oscillator equations. A segmenting recursion method was used by Zhang and Zeng [175] for the solution of the Duffing equation, Hu and Chung [176] investigated the stability of the Van der Pol equation, and Raja et al. computed solutions of Mathieu's systems with the strength of intelligent computing [177].

The radial basis function network is a powerful multilayer perceptron used for universal approximation, function approximation, interpolation and pattern recognition [178]. Although neural networks, with their dynamic architecture design, are used for modeling the noisy, irregular and chaotic behavior of nonlinear systems, they can produce poor fitting or over-fitting of the exact interpolation. When a nonlinear deterministic system exhibits irregular behavior, conventional approximation techniques and the multilayer perceptron (MLP), even with the back-propagation algorithm, are unable [179] to capture the chaotic behavior of the system. The sigmoid function is widely used as a global approximator, but it struggles to identify and estimate local features [180] and the topology of input-output connections. We have proposed a hybrid model that accelerates numerical computing, saving computational power and storage capacity for time- and parameter-dependent ODEs and PDEs, and its results are verified on the dusty plasma differential equation.


We present a hybrid of a Gaussian radial basis multilayer perceptron and a deep nonlinear autoregressive (NAR) time-series neural network model in a two-step training algorithm. An MLP with nonlinear transfer functions in a NAR dynamical neural network is designed to model the global features, and a Gaussian radial basis function neural network is then used to identify, intensify and capture the local features of the input-output connection in the modeling of the nonlinear dusty plasma VdP-ME differential equation.

The innovative contributions of the designed hybrid neurocomputing approach, with exploration of its different capabilities, are presented in terms of the following salient features:

(a) A new computing paradigm of the NAR-RBFs neural network model is designed for solving the NCDS represented by the initial value problem of the bimodal VdP-ME.

(b) A class of new transformations is introduced to ensure convergence and a reduction in search time, improving the efficiency, smoothness, functionality and parametric computations.

(c) The competency of the proposed hybrid neural network model is endorsed in terms of accuracy, stability, fast convergence, low sensitivity and dynamic consistency in characteristics for variant chaotic systems.

(d) The method's extendibility allows the building of a generalized framework for modeling higher-order ODE and PDE solutions with applications beyond nano-technology, especially in the modeling of stiff scenarios.

Section 5.2. describes the dynamics of Van der Pol Mathieu’s Equation. Section 5.3. presents

design methodology of the proposed model. Section 5.4. defines the performance indices

used to measure performance of the model. Statistical analysis of the VDP-ME is discussed

in Section 5.5. Construction of the proposed nonlinear combination model is described in

Section 5.6. Experimental results of the NAR-RBFs model are discussed in section 5.7.


Different scenarios for the VdP-ME are simulated in Section 5.8. Finally, Section 5.9 contains the summary and conclusion. For the convenience of readers, the notations used in this chapter are summarized in Table 5.1.

5.2 VAN DER POL MATHIEU'S EQUATION

This section demonstrates the solution of a well-known ODE used to implement the proposed hybrid NAR-RBFs model. The Van der Pol equation, proposed by the Dutch scientist Balthazar van der Pol [181] to explain triode oscillations in electric circuits, is used to assess the performance of the model:

\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + \frac{4\pi q_{d}^{2} n_{d}}{m_{d}}\,\frac{K^{2}}{K^{2} + K_{D}^{2}}\, y = 0        (5.1)

where q_d = q_{d0}(1 + s cos γt) represents the dust charge which, for analytic tractability, varies in time t with frequency γ and modulation parameter s for the dust particles. In a special case the differential equation reduces to that of a parametrically excited pendulum, described as:

\frac{d^{2}\theta}{dt^{2}} + \left[\frac{g}{L} - \frac{A}{L}\, f(t)\cos(\omega t)\right]\theta = 0        (5.2)

whose estimation consists of a fundamental system of solutions [182] in the form of power series in the excitation-amplitude coefficient, which helps to understand the behavior of the designed model, particularly in the unstable region of dynamic systems. Introducing the value of the charge into Eq. (5.1), the equation becomes:


\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + \frac{4\pi q_{d0}^{2} n_{d}}{m_{d}}\,\frac{K^{2}}{K^{2} + K_{D}^{2}}\,(1 + s\cos\gamma t)\, y = 0        (5.3)

Introducing the particular (dust plasma) frequency ω_{pd}^2 = 4π n_d q_{d0}^2 / m_d and the oscillatory frequency ω^2 = ω_{pd}^2 K^2/(K^2 + K_D^2), the dust particle equation becomes

\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + \omega^{2} y = 0, \qquad y(0) = 1,\; y'(0) = 1        (5.4)

This is a classical example of a self-oscillatory dynamic system in which the nonlinear Duffing-type equations are difficult to solve [183] and to model analytically. Different perturbation methods [184] and numerical techniques, including various iteration methods, have been used for nonlinear estimation of such equations.

5.3 DESIGN METHODOLOGY

The sigmoid function acts globally [185], while an RBF network with a radial basis transfer function captures the local [186] behavior of a small region of the input space. When the input moves away from a center, the neuron's radial basis response decreases, so only a few RBF units remain active and the rest fall close to zero as the input grows, whereas the sigmoid function remains close to 1 as the data increase. These characteristics of the sigmoid function and of RBFs make them uniquely suited to capturing the global and local architecture, respectively, in modeling. The overall graphical abstract of the proposed NAR-RBFs technique for modeling the VdP-ME differential equation is shown in Figure 5.1.


5.3.1. DYNAMIC NONLINEAR AUTOREGRESSIVE NEURAL NETWORK

(NAR)

A dynamic nonlinear autoregressive model is used for regression, interpolation and prediction of a discrete time series y(t) at time t; for data with high variance and sporadic behavior the nonlinear approach is followed. A nonlinear autoregressive neural network is a discrete model consisting of an input layer, input delays, a hidden layer, an output layer and output delays, as shown in Figure 5.2, and is approximated as follows:

\mathbf{x}(k) = \left[x(k), x(k-1), x(k-2), \ldots, x(k-t), y(k-1), y(k-2), \ldots, y(k-t)\right]        (5.5)

o(k) = f_{N_1}\!\left(\mathbf{x}(k)\, w_{N_1} + b_{N_1}\right)        (5.6)

y(k) = f_{N_2}\!\left(o(k)\, w_{N_2} + b_{N_2}\right)        (5.7)

Here x is the input vector of dimension P and y is the output vector of dimension Q; o is the hidden-layer node vector of dimension N, t is the delay order and p is the number of output delays; b_{N_1} is the threshold (bias) of the input layer and b_{N_2} is the threshold of the hidden layer. With t as the input delay step and p resultant delay steps, the output of the j-th hidden node is formed through the connection weights between the delay and hidden layers, symbolized by w_{N_1}; the transfer function of the hidden nodes is f_{N_1} and the activation function of the output nodes is f_{N_2}. The past p values of the data are required to approximate the function when training the neural network. The NAR model optimizes the network weights and neuron biases in the training phase with the Levenberg-Marquardt algorithm; the structure of the NAR model is shown in Figure 5.3.

Propagation of the error is done by back-propagating the error of the outer layer to the preceding layers to adjust the error in the hidden layers, as shown in Figure 5.4. The algorithm consists of two steps: in the first step, forward propagation processes the input, passing information from the input layer through the hidden layer one unit at a time; in the second stage the reverse process minimizes the error involved.
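The forward pass of Eqs. (5.5)-(5.7) can be illustrated in a few lines of NumPy; the weight names, the sigmoid hidden activation and the linear output are assumptions for this sketch, not the trained MATLAB network of the thesis.

```python
# Minimal NumPy sketch of the NAR one-step-ahead prediction of Eqs. (5.5)-(5.7).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nar_predict(y_hist, W1, b1, W2, b2, delay):
    """Predict the next value of the series from its last `delay` values."""
    x_k = np.asarray(y_hist[-delay:], dtype=float)[::-1]   # Eq. (5.5): delayed regressor vector
    o_k = sigmoid(W1 @ x_k + b1)                           # Eq. (5.6): hidden-layer output
    return W2 @ o_k + b2                                   # Eq. (5.7): linear output node
```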

5.3.2. RADIAL BASIS FUNCTIONS (RBFS)

One-dimensional modeling by a radial basis function network is represented as follows:

y_i(x) = f(x) = \sum_{j=1}^{m} w_{ji}\, h_j(x), \qquad i = 1, 2, \ldots, n        (5.8)

where w represents the output-layer weights, y is the network output and n is the number of network outputs. A radial basis function neural network consists of three layers, as shown in Figure 5.5, in which the first layer is the input layer, the second is the hidden layer and the third is the output layer.

The transfer functions used in the first layer of the RBF network are different from the sigmoid functions generally used in the hidden layers of a multilayer perceptron (MLP). We consider only Gaussian RBFs as the activation functions of the hidden-layer neurons. The array of computing units is represented by the hidden-node center vectors c, parametric vectors of the same size as the input vector x. The Euclidean distance between the input vector and a center c_i is defined as:

d_i(t) = \left\| x(t) - c_i(t) \right\|        (5.9)


The output of the hidden layer is produced by the nonlinear Gaussian RBF activation function, calculated as:

h_j(t) = \exp\!\left(-\frac{\left\| x(t) - c_j(t) \right\|^{2}}{2 a_j^{2}}\right), \qquad j = 1, 2, \ldots, m        (5.10)

where a_j is a positive scalar width and m represents the number of hidden nodes.

Fig 5. 1 Structure of NAR model system


Fig 5. 2 Proposed methodology NAR-RBF-NN for nonlinear dusty plasma models


Fig 5. 3 Structure of NAR neural network model

Fig 5. 4 Back propagation neural network

Fig 5.5 Architecture RBFs Network

The output layer is a linear combination with weights, described as follows:

y_i(t) = \sum_{j=1}^{m} w_{ji}\, h_j(t), \qquad i = 1, 2, \ldots, n        (5.11)
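Putting Eqs. (5.8)-(5.11) together, an RBF network can be fitted by building the Gaussian design matrix and solving for the output weights by least squares, as described in the text that follows. The sketch below is illustrative: the choice of centers, the width rule a_i = λ·(distance to the nearest center) anticipating the relation given just after, and all names are assumptions of this example.

```python
# Sketch of an RBF network fit: Gaussian design matrix plus least-squares weights.
import numpy as np

def rbf_design_matrix(x, centers, widths):
    # h_j(x_i) = exp(-(x_i - c_j)^2 / (2 a_j^2)), Eq. (5.10)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * widths[None, :] ** 2))

def fit_rbf(x, y, centers, lam=1.0):
    d = np.abs(centers[:, None] - centers[None, :])
    np.fill_diagonal(d, np.inf)
    widths = lam * d.min(axis=1)                 # a_i = lambda * distance to nearest center
    H = rbf_design_matrix(x, centers, widths)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)    # least-squares output weights, Eq. (5.11)
    return w, widths
```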


Here {c_i}, i = 1, …, m, is the set of centers and {a_i}, i = 1, …, m, is the set of widths in the RBF modeling. The performance of the RBFN depends on the centers and widths of the radial basis functions in the network; a small variation in an RBF width makes the neuron response correspondingly more peaked or flatter. The width of the i-th member is calculated from the relation a_i = λ d_i, where λ > 0 is a factor and d_i is the distance from the i-th neuron to its nearest center. By presenting the input collocation points {x_j}, j = 1, …, n, with the corresponding required outputs {y_j}, j = 1, …, n, the weights {w_i}, i = 1, …, m, are calculated by the network using the method of least squares. In the radial basis network, the approximation is obtained by converting the given differential equation into a system of lower-order derivatives, and the expression is cast in a least-squares formulation as follows:

\frac{d^{p} f(t)}{dt^{p}} = \sum_{i=1}^{m} w_i\, h_i(t)        (5.12)

The approximating function f(t), together with its derivatives, can then be expressed in terms of the RBFs as:

\frac{d^{p-1} f(t)}{dt^{p-1}} = \frac{d^{p-1} f(a)}{dt^{p-1}} + \sum_{i=1}^{m} w_i \int_{a}^{t} h_i(t_1)\, dt_1        (5.13)

Integrating successively in the same way from a to t for the remaining lower-order derivatives gives

\frac{d^{p-k} f(t)}{dt^{p-k}} = \frac{d^{p-k} f(a)}{dt^{p-k}} + \cdots + \sum_{i=1}^{m} w_i \int_{a}^{t}\!\int_{a}^{t_k}\!\cdots\!\int_{a}^{t_2} h_i(t_1)\, dt_1 \cdots dt_k, \qquad k = 2, \ldots, p-1        (5.14)–(5.16)

and finally the function itself,

f(x) = f(a) + \sum_{i=1}^{m} w_i \int_{a}^{x}\!\int_{a}^{t_p}\!\cdots\!\int_{a}^{t_2} h_i(t_1)\, dt_1 \cdots dt_p        (5.17)

The iterated integral over a finite interval can be reduced to a one-dimensional integral, as given by Abramowitz and Stegun [187]:

\int_{a}^{x}\!\int_{a}^{t_p}\!\cdots\!\int_{a}^{t_2} h(t_1)\, dt_1 \cdots dt_p = \frac{1}{(p-1)!}\int_{a}^{x} (x - t)^{p-1}\, h(t)\, dt        (5.18)

The accuracy of the numerical solution of the ordinary differential equation is measured by the norm of the relative error, given by the formula:

N_e = \sqrt{\sum_{i=1}^{n}\left(y(t_i) - f(x_i)\right)^{2} \Big/ \sum_{i=1}^{n} y(t_i)^{2}}        (5.19)

Here x_i is the i-th test point, n is the total number of test points, and f and y are the estimated and exact values of the function, respectively.

N_e(h) \approx \lambda^{1/h}        (5.20)

where h is the center spacing and λ is the exponential-model parameter calculated from a least-squares fit.


In general, time-series data consist of linear and nonlinear parts, and RBF neural networks can capture both trends. Here we capture the local and global features with the help of the hybrid model.

5.4 PERFORMANCE INDICES

In order to evaluate performance of proposed nonlinear combination model, we have used

mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage

error (MAPE) defined as follows:

MAE = \frac{1}{N}\sum_{t=1}^{N}\left|y_t - \hat{y}_t\right|, \qquad RMSE = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^{2}}, \qquad MAPE = \frac{100\%}{N}\sum_{t=1}^{N}\left|\frac{y_t - \hat{y}_t}{y_t}\right|        (5.21)

Explained variance (between groups):

SSTR = \sum_{i} n_i\left(\bar{x}_i - \bar{x}\right)^{2}, \qquad MSTR = \frac{SSTR}{c - 1}        (5.22)

Unexplained variance (within groups):

SSE = \sum_{i}\sum_{j}\left(x_{ij} - \bar{x}_i\right)^{2}        (5.23)

5.4.1 STATISTICAL TEST

The Tukey statistical criterion is used to analyze the variance based on multiple trials of the NAR-RBFs model for the VdP-ME equation.


The Tukey statistical criterion is defined as

T = q_{\alpha,(c,\,n-c)}\sqrt{\frac{MSE}{n_i}}        (5.24)

where

q_{α,(c, n−c)} = Studentized range distribution value based on c and n − c degrees of freedom

c = number of treatments

MSE = mean square error from the ANOVA table

n = total sample size

n_i = sample size of the treatment group with the smallest number of observations

5.5 STATISTICAL ANALYSIS OF VDP-ME

The statistical description of the VdP-ME solution exhibits both left and right skewness and cannot be fitted exactly by a normal probability distribution, as shown in Figure 5.6 and Table 5.1.

Table 5.1 Statistical description of the VDP-ME equation

Variable  Count  Mean     St Dev  Minimum  Median  Maximum  Mode  Skewness  Kurtosis
y(x)      501    -0.0027  1.0002  -1.4135  0.0039  1.4142   0*    0.01      -1.50


Fig 5.6 Normal probability distribution of VDP-ME

A Gaussian mixture model distribution was used for the statistical analysis of the VdP-ME equation, and a bimodal distribution was calculated as the best fit, as shown in Figure 5.7. The Akaike Information Criterion (AIC) was used to estimate the goodness of fit, and the PDF model with the lowest AIC value was selected as the best fit. The statistical description with the bimodal Gaussian distribution, using the Gaussian mixture model for the exact solution of the bimodal VdP-ME in the interval [0,50] with two energy states, taking Δt = 0.1, is presented in Figures 5.8-5.9 and in Table 5.2.


Fig 5.7 Bi-model distribution of VDP-ME probability

Fig 5.8 Probability distribution of Bi-model VDP-ME

Table 5.2 Statistical description of Bi-model VDP-ME

Description value

Mixing proportion of component 1 0.56841

Mixing proportion of component 2 0.431519

Mean component 1 -0.4920

Mean component 2 1.234


Gaussian Mix Model AIC 1.25E+03

Gaussian Mix Model BIC 1.27E+03

Gaussian Mix Model tolerance 1.00E-08

5.5.1 SPECIAL NONLINEAR TRANSFORMATION

Most researchers use the linear input t directly in the machine learning process to model the discrete outcome of the dynamic behavior of different systems governed by stochastic higher-order differential equations with chaotic and multi-modal behavior. The direct use of a linear input for modeling not only takes additional time to build the neural network structure, but also makes the modeling process more complicated through the inclusion of extra neurons, weights and layers. The direct process consumes additional system memory, which slows down the machine learning process and leads to unsatisfactory results and low accuracy in bimodule stochastic modeling.

In the proposed methodology, special transformations are introduced for a class of differential equations, which convert a local optimum to a global optimum based on the probability distribution of the desired model. The proposed transformation ensures convergence by first converting the linear input t into a bimodule, large-domain input form before using it in the machine learning process. From the input t, transformation vectors T can be defined according to the parametric behavior of the desired model, as shown in Table 5.3.

Table 5.3 Probability-based proposed transformations

Desired behavior               Transformation vector T                                Probability distribution
Highly stochastic, nonlinear   T = [t^4, -t^4]                                        bimodule
Bi-model                       T = [t^4, -t^4, t^3, -t^3, t^2, -t^2, t, -t]           bimodule

5.6 PROPOSED HYBRID RBF-NAR MODEL

Thus, we can express the proposed model as follows:

x_i = L(x_i) + N(x_i)        (5.25)

where N(x_i) is the nonlinear component modeled by the nonlinear autoregressive (NAR) dynamic neural network at time t, while L(x_i) represents the radial basis function (RBF) data modeling at time t. The NAR component models the global features well using a few neurons with sigmoid transfer functions, while the dynamic Gaussian features of the RBFs in L(x_i) model the particular local trends. The proposed hybrid model thus fits both the global and the local features of the nonlinear chaotic time series.

Fig 5.9 CDF and PDF distribution of the bimodal VDP-ME (panels: bimodal PDF, CDF and PDF).

5.7 EXPERIMENTAL RESULTS

In this section we apply the hybrid model. Many dynamic systems require an exact or approximate model of their dynamics, and parametric methods fail to obtain an exact solution for chaotic nonlinear dynamic systems. We therefore used the Runge-Kutta method to obtain the numerical solution of the ODE before modeling with deep artificial-intelligence techniques. To model the discrete solution of the Van der Pol-Mathieu differential equation with the autoregressive neural network, a total of 1000 values of y(x), calculated with the RK4 method in MATLAB over the specified interval, are used; 70% of the data are used for training, 15% for testing and the remaining 15% for validating the results. The architecture of the NAR network consists of five neurons in the input layer, one hidden layer and an output layer, as shown in Figure 5.10.
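The thesis generates the reference solution with RK4 in MATLAB; the following Python sketch is an equivalent illustration for Eq. (5.4). The default parameters (α = 0.2, β = 0.0001, ω = 1, interval [0, 50], 1000 steps) mirror Scenario 1, Case 1 described later; everything else, including the function names, is an assumption of this example.

```python
# Illustrative RK4 solver for Eq. (5.4): y'' - (alpha - beta*y^2) y' + omega^2 y = 0,
# with y(0) = 1, y'(0) = 1.
import numpy as np

def vdp_me_rhs(state, alpha, beta, omega):
    y, v = state
    return np.array([v, (alpha - beta * y ** 2) * v - omega ** 2 * y])

def rk4_solve(alpha=0.2, beta=0.0001, omega=1.0, t_end=50.0, n=1000):
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    ys = np.empty((n + 1, 2))
    ys[0] = [1.0, 1.0]                      # initial conditions y(0) = 1, y'(0) = 1
    for k in range(n):
        s = ys[k]
        k1 = vdp_me_rhs(s, alpha, beta, omega)
        k2 = vdp_me_rhs(s + 0.5 * h * k1, alpha, beta, omega)
        k3 = vdp_me_rhs(s + 0.5 * h * k2, alpha, beta, omega)
        k4 = vdp_me_rhs(s + h * k3, alpha, beta, omega)
        ys[k + 1] = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, ys[:, 0]
```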


Fig 5.10 Structure of proposed NAR Model

The training, testing and validation responses and time-scaled errors are shown in Figures 5.11-5.12 and Table 5.4.

Fig 5.11 NAR model response dusty plasma equation


Fig 5.12 Training, testing and validation of the NAR model for the dusty plasma equation.

Table 5.4 NAR model performance n=1000 and d = 1

Index MSE R Values

Training 5.63E-09 1 70%

Validation 6.07E-09 1 15%

Testing 1.01E-09 1 15%

Epoch 1000

Time 00.00.03

Performance 2.73E-05

Gradient 1.07E-07

MU 1.00E-15

Validation Check 6


The best validation performance is found at 1000 epochs with an RMSE of 1.11E-05, with the residual shown in Figure 5.13. The residual of the NAR model is transferred to the RBFs neural network for the local search; the radial basis architecture consists of one hidden layer with 16 neurons, trained with the Gaussian radial basis transfer function, and one output layer, as shown in Figure 5.14.

Fig 5.13 Best Validation error of NAR model Equation

Fig 5.14 Residual of NAR model fit

The structure of the proposed RBFs neural network model is shown in Figure 5.15.


Fig 5.15 Structure of RBFs neural network model

The best performance of the RBFs neural network, calculated at 1000 epochs with an RMSE of 2.007E-08, is shown in Figure 5.16. The training and validation performance is shown in Figure 5.17, with a validation performance of 1.6196E-09. The residual plot of the hybrid model is shown in Figure 5.18.

Fig 5.16 Best validation performance of RBFs model

Fig 5.17 Training, validation and test error of RBFs model


Fig 5. 18 Model Residual plot of NAR-RBFs neural network

5.8 VDP-ME MODEL SCENARIOS

Results of the VdP-ME dusty plasma modeling in different scenarios, their simulation over diverse time domains and their variation under different conditions are presented in this section.

Scenario 1: VDP-ME with varying rate of charged dust gain production α

The system model of Eq. (5.1) in the interval [0, 50], with angular velocity ω = 1 and β = 0.0001, is described as:

\frac{d^{2}y}{dt^{2}} - \left(\alpha - 0.0001\, y^{2}\right)\frac{dy}{dt} + y = 0, \qquad y(0) = 1,\; y'(0) = 1        (5.26)

Three cases of Scenario 1 are formulated with α = 0.2, α = 0.3 and α = 0.6, respectively.

Scenario 2: VDP-ME with varying rate of charged dust gain loss β

The system model of Eq. (5.1) in the interval [0, 50], with angular velocity ω = 1, is given by:

\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + y = 0, \qquad y(0) = 1,\; y'(0) = 1        (5.27)

Three cases of Scenario 2 are formulated with (α, β) = (0.01, 0.01), (0.01, 0.1) and (1, 0.001), respectively.

Scenario 3: VDP-ME with changes in both the charged dust gain production α and the charged dust gain loss β

The system model of Eq. (5.1) in the interval [0, 50], with angular velocity ω = 1, is given by:

\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + y = 0, \qquad y(0) = 1,\; y'(0) = 1        (5.28)

Three cases of Scenario 3 are formulated with (α, β) = (0.2, 0.4), (0.6, 0.5) and (0.9, 0.6), respectively.

Scenario 4: VDP-ME with changes in both the charged dust gain production α and the charged dust gain loss β in larger domains

The system model of Eq. (5.1) in the intervals [0, 100] and [0, 200], with angular velocity ω = 1, is given by:

\frac{d^{2}y}{dt^{2}} - \left(\alpha - \beta y^{2}\right)\frac{dy}{dt} + y = 0, \qquad y(0) = 1,\; y'(0) = 1        (5.29)

Two cases of Scenario 4 are formulated with (α, β) = (0.01, 0.01) and (0.01, 0.1), respectively; in the third case of Scenario 4, with (α, β) = (0.02, 0.001), another RBFs layer is added in the NAR-RBFs modeling to assess the performance of deep learning for the VdP-ME differential equation.

5.8.1 ANALYSIS OF MULTIPLE INDEPENDENT TRIALS

To evaluate the performance of the proposed NAR-RBFs model, convergence, stability and consistency analyses under different trials for the VdP-ME differential equation are presented. The values of the performance indices based on MAE and RMSE, computed for different values of the time domain, are used for the convergence analysis of the proposed model.

We conducted three independent trials in the interval [0, 50] for Scenario 1 (Case 1) to assess the sample variance, based on the mean values obtained from the simulations of the NAR-RBFs model. We selected 03 samples with a sample size of 501 each and calculated the explained variance from the independent samples and the unexplained variance from the population of size 1503 with 1502 degrees of freedom, using a significance level of α = 0.00001.

The statistical results in Table 5.5 and Table 5.6 show that F = 0, below the critical value, for the unexplained variance, with a p-value of 1 and an accuracy of 99.99999%. We compared all three samples by analysis of variance and the Tukey simultaneous test, with the results shown in Table 5.7, and we accept the null hypothesis, concluding that all three samples have equal means and hence represent the same model.

Hypothesis:

Null hypothesis All means are equal

Alternative hypothesis Not all means are equal

Significance level α = 0.0000100000


Table 5.5 Analysis of Variance (one-way ANOVA)

Variance DF SSE MSE F-Value P-Value

Explained 2 0.00 0.00000 0.00 1.000

Unexplained 1500 1500.54 1.00036

Total 1502 1500.54

Table 5.6 Grouping Information Using the Tukey Method

Factor N Mean SD Group CI 99.99999%

Simulation 1 501 0.00269 1.0002 A (-0.2008, 0.1954)

Simulation 2 501 0.00269 1.0002 A (-0.2008, 0.1954)

Simulation 3 501 0.00269 1.0002 A (-0.2008, 0.1954)

Table 5.7 Tukey Simultaneous Tests for Differences of Means

Difference of Levels    Difference of Means    SE of Difference    T-Value    Adjusted P-Value

Simulations (1,2) 0.0000 0.0632 0.00 1.000

Simulations (1,3) 0.0000 0.0632 0.00 1.000


Simulations (2, 3) 0.0000 0.0632 0.00 1.000

Comparisons of the results modeled by NAR-RBFs, with their MAE, CDF and PDF against the exact solution for Scenario 1 and Scenario 2, are plotted graphically in Figure 5.19. The results of the independent trials for Scenario 1, Case 1 are obtained with MAE of the order of 10^-16, 10^-16 and 10^-18, respectively. The NAR-RBFs model results for Scenario 3 and Scenario 4 are shown graphically, with their MAE and RMSE, in Figure 5.20 and Figure 5.21, respectively, and the comparison of the results obtained from the NAR-RBFs models with the exact solution for Scenario 4, Case 3 is shown in Figure 5.22. The accuracy of the proposed NAR-RBFs can be increased by additionally modeling the NAR-RBFs error with another RBFs layer after the transformation, as shown in Figure 5.22 and Table 5.8.

Table 5.8 NAR-RBFs Model for Scenario 4 Case 3 for VDP-ME

t y(t) NAR-RBFS AE RMSE

0 1 1 1.30E-20 1.69E-40

0.1 1.0948 1.0948 4.04E-20 1.63E-39

0.2 1.1787 1.1787 1.11E-19 1.22E-38

0.3 1.2509 1.2509 3.35E-20 1.12E-39

0.4 1.3105 1.3105 6.90E-20 4.76E-39

0.5 1.357 1.357 8.45E-20 7.14E-39

0.6 1.39 1.39 3.75E-20 1.40E-39

0.7 1.4091 1.4091 2.78E-20 7.73E-40


0.8 1.4142 1.4142 5.17E-24 2.67E-47

0.9 1.4051 1.4051 6.52E-20 4.25E-39

1 1.382 1.382 8.24E-20 6.79E-39

1.1 1.345 1.345 9.71E-20 9.42E-39

1.2 1.2945 1.2945 3.52E-20 1.24E-39

1.3 1.231 1.231 5.05E-20 2.55E-39

1.4 1.1553 1.1553 1.18E-20 1.40E-40

1.5 1.0681 1.0681 6.81E-20 4.63E-39

1.6 0.9702 0.9702 5.33E-20 2.85E-39

1.7 0.8626 0.8626 4.97E-20 2.47E-39

1.8 0.7465 0.7465 1.16E-19 1.35E-38

1.9 0.6228 0.6228 2.45E-20 6.01E-40

2 0.4929 0.4929 4.78E-19 2.29E-37

The probability distributions of the NAR-RBFs results for Scenario 1 to Scenario 3, together with the parametric changes for all three scenarios, are shown in Figures 5.23-5.24. The comparison of the PDF and CDF of the proposed NAR-RBF model with the exact solution for Scenario 4, Cases 1 and 2, is shown graphically in Figure 5.25, and the parametric comparison of the NAR-RBFs models with the actual results is shown in Figure 5.26 for Scenarios 1-3. The comparison of the NAR-RBFs model for different values of α and β is shown in Figure 5.27. Moment analysis using the mean, standard deviation, skewness and kurtosis is presented in Table 5.9 and Table 5.10. Convergence analysis for different values of the width h is presented in Table 5.11, which shows that for small values of h the MAE and RMSE decline.

Table 5.9 Moments analysis (St- Dev & Variance) for the proposed VDP-ME

Variable Count Mean St Dev Variance Skewness

Scenario 1 Case 1 501 0 1.0002 1.0004 0.006

Scenario 1 Case 2 501 0 1.0002 1.0004 0.006

Scenario 1 Case 3 501 0 1.0002 1.0004 0.006

Scenario 2 Case 1 501 -0.01 1.3061 1.706 -0.001

Scenario 2 Case 2 501 0 0.9975 0.995 0.006

Scenario 2 Case 3 501 0.0009 0.849 0.7209 0.018

Scenario 3 Case 1 501 -0.01 0.9961 0.9921 0.009

Scenario 3 Case 2 501 -0.03 1.5345 2.3546 0.019

Scenario 3 Case 3 501 0 1.7318 2.9991 -0.021

Scenario 4 Case 1 1001 -0.01 1.1117 1.236 0.004


Scenario 4 Case 2 501 0 1 1 0.007

Scenario 4 Case 3 501 -0.01 1 1.0004 0.006

The decline of the MAE and RMSE with decreasing h also confirms the convergence of the proposed model.

Table 5.10 Moments analysis (Kurtosis & RMSE) for the proposed VDP-ME

Variable Min Max Kurtosis MAE RMSE

Scenario 1 Case 1 -1.4135 1.414 -1.5 2.02E-10 8.7E-20

Scenario 1 Case 2 -1.4135 1.414 -1.5 4.3E-17 6.07E-29

Scenario 1 Case 3 -1.4135 1.414 -1.5 2.5E-10 8.74E-20

Scenario 2 Case 1 -1.993 1.99 -1.46 1.87E-14 9.64E-28

Scenario 2 Case 2 -1.4161 1.416 -1.5 8.89E-15 2.02E-28

Scenario 2 Case 3 -1.2947 1.362 -1.49 7.36E-15 1.63E-27

Scenario 3 Case 1 -1.413 1.413 -1.5 1.22E-15 7.01E-27

Scenario 3 Case 2 -2.2038 2.197 -1.51 1.4E-14 8.21E-26

Scenario 3 Case 3 -2.4678 2.466 -1.52 3.89E-14 3.07E-27


Scenario 4 Case 1 -1.7006 1.694 -1.49 1.58E-13 5.81E-25

Scenario 4 Case 2 -1.4135 1.414 -1.5 1.72E-12 2.95E-24

Scenario 4 Case 3 -1.4135 1.414 -1.5 1.51E-19 8.98E-38

Fig 5.19 Comparison of results obtained from the NAR-RBFs models with the exact solution for Scenarios 1 and 2 (panels show the solution, absolute error (AE) and MSE for each case).

Fig 5.20 Comparison of results obtained from the NAR-RBFs models with the exact solution for Scenario 3 (panels show the approximate solution, absolute error, and MSE for each case).

Table 5.11 Convergence analysis of the proposed NAR-RBFs models: RMSE of the NAR-RBFs solution for different step sizes h

xn     h=0.75     h=0.1875   h=0.1      h=0.05     h=0.0075
0.75   2.17E-19   1.17E-19   8.74E-20   3.84E-24   7.01E-27
1.5    4.36E-19   5.43E-20   8.21E-26   1.63E-26   9.64E-28
3      1.26E-19   1.16E-19   5.81E-25   5.71E-27   2.02E-28
4.5    7.41E-19   6.56E-19   2.95E-24   2.15E-25   1.63E-27
6      3.56E-19   6.43E-20   8.07E-25   8.07E-28   2.98E-29
7.5    1.75E-19   2.43E-20   2.95E-24   9.65E-26   8.21E-29

Fig 5.21 Comparison of results obtained from the NAR-RBFs models with the exact solution for Scenario 4, Cases 1 and 2 (panels show the approximate solution, absolute error, and MSE).

Fig 5.22 Comparison of results obtained from the NAR-RBFs models with the exact solution for Scenario 4 Case 3 (panels show the approximate solution, absolute error, and MSE).


Fig 5.23 Comparison of the CDF, PDF, and parametric plots of the proposed NAR-RBFs model with the exact solution for Scenarios 1 and 2.


Fig 5.24 Comparison of the CDF and PDF of the proposed NAR-RBFs model with the exact solution for Scenario 3.

Fig 5.25 Comparison of the CDF and PDF of the proposed NAR-RBFs model with the exact solution for Scenario 4.


Fig 5.26 Parametric comparison of the proposed NAR-RBFs model with the exact solution for Scenarios 1-4.

Fig 5.27 Parametric comparison of the proposed NAR-RBFs model with the exact solution for Scenarios 2 and 3: (a) parametric variation for α; (b) parametric variation for α and β.


5.9. Conclusion

In this chapter, a hybrid NAR-RBFs approach to model the stochastic nonlinear dusty-plasma VDP-ME differential equation in a predefined domain of the input space is presented. Different experiments for the hybrid model are conducted for the training and testing processes, combining nonlinear autoregressive neural network modeling with the radial basis function approximation paradigm. This hybrid parametric identification approach for nonlinear modeling of the differential equation representing the dynamics of dust grain charge production and loss in dusty plasma systems has shown remarkable performance with reasonable accuracy as well as convergence.

In the future, one may implement the proposed design of the hybrid NAR-RBFs neural model as an alternative computing paradigm for the solution of models in astrophysics [188], atomic physics [189], nonlinear optics [190], random matrix theory [191], energy [192], bioinformatics [193-195], controls [196], and signal processing [197].


CHAPTER 6

SUMMARY AND CONCLUSION

_______________________________________________

This chapter is devoted to a brief summary and conclusion of the research work. In addition, we point to further related studies for the reader's interest.

6.1 SUMMARY

The proposed study with applications is presented in six chapters of the dissertation. Chapter 1 is devoted to the historical brief, importance, problem statement, and objectives of the proposed methodologies based on artificial intelligence solvers. In Chapter 2, we provide a literature review along with basic concepts, preliminaries, and the neural network methodology.

In Chapter 3, a neuro-fuzzy model is presented to predict summer precipitation for different meteorological stations. Five meteorological stations of Sindh, Pakistan are selected, and data of annual and monthly summer (June to August) rainfall amounts over a period of 42 years are analyzed to show their relationship with the annual rainfall pattern. Three values of monthly mean rainfall are added as exogenous inputs to improve the forecasting results. The sigmoid function is used as the activation function of the hidden-layer neurons, and the network is trained epoch-wise with the Levenberg-Marquardt back-propagation through time (BPTT) algorithm. The study finds that the NARX model produces better forecasts and faster convergence than ARIMA, and the neural network proves to be a suitable technique for predicting different climate conditions; a minimal NARX-style sketch is given below.
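For intuition only, the following Python sketch assembles a NARX-style regression in which lagged rainfall values and exogenous monthly-mean inputs feed a small sigmoid-activated network. The synthetic series, lag choices, and the generic lbfgs solver are stand-in assumptions; the thesis trains with Levenberg-Marquardt.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 126                                              # toy length: 42 summers x 3 months
rain = rng.gamma(shape=2.0, scale=30.0, size=n)      # synthetic monthly rainfall (mm)
exog = np.column_stack([np.roll(rain, k) for k in (12, 24, 36)])  # toy exogenous inputs

lags = 3
X, y = [], []
for i in range(lags, n):
    X.append(np.concatenate([rain[i - lags:i], exog[i]]))   # past rainfall + exogenous
    y.append(rain[i])
X, y = np.array(X), np.array(y)

narx = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                    solver="lbfgs", max_iter=5000, random_state=0)
narx.fit(X[:-12], y[:-12])                           # hold out the last 12 points
pred = narx.predict(X[-12:])
print("held-out RMSE:", np.sqrt(np.mean((pred - y[-12:]) ** 2)))

A real application would replace the synthetic series with the station records and tune the lag structure per station.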

In Chapter 4, a fractional neuro-sequential paradigm is presented for parametric modeling of stock exchange variables with a hybrid ARFIMA-LSTM. The ARFIMA model is applied to filter the linear tendencies in the data, and the residual of ARFIMA is modeled with an LSTM using additional exogenous dependent variables. The developed model is evaluated on PSX stock market company data using RMSE, MSE, and MAPE, and the results are also compared with a generalized regression radial basis neural network; a rough sketch of the ARFIMA-stage decomposition is given below.
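The ARFIMA stage can be viewed as fractional differencing followed by a short linear fit, with the remaining residual handed to the LSTM stage. The sketch below shows only that decomposition under assumed values: the memory parameter d, the random-walk placeholder for prices, and the AR(1) stand-in for the ARMA part are not taken from the thesis.

import numpy as np

def frac_diff_weights(d, n):
    # Binomial-expansion weights of the fractional differencing operator (1 - B)^d
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d):
    # Apply (1 - B)^d with an expanding window truncated at each time step
    w = frac_diff_weights(d, len(series))
    out = np.empty_like(series, dtype=float)
    for i in range(len(series)):
        out[i] = np.dot(w[: i + 1][::-1], series[: i + 1])
    return out

prices = 100.0 + np.cumsum(np.random.default_rng(1).normal(size=500))  # placeholder prices
x_d = frac_diff(prices, d=0.35)                      # long-memory filtered series

phi = np.polyfit(x_d[:-1], x_d[1:], 1)[0]            # AR(1) slope as a crude ARMA stand-in
residual = x_d[1:] - phi * x_d[:-1]                  # this residual would feed the LSTM
print("residual std:", residual.std())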


In Chapter 5, a hybrid NAR-RBFs neural network is designed for the dynamical analysis of a nonlinear dusty plasma system. The developed NAR-RBFs neural network model is implemented on the bi-modal NCDS represented by the Van der Pol-Mathieu equation (VDP-ME) for different scenarios based on variations in dust grain production/loss, for both small and large time domains. Statistical tests are used to analyze the variance of the NAR-RBFs model for the VDP-ME over multiple trials. The proposed hybrid model is trained in a two-step procedure: in the first step, an MLP with nonlinear transfer functions in the NAR dynamical neural network models the global features, while in the second step a Gaussian radial basis function neural network is used to identify, intensify, and capture the local features of the input-output relationship in the modeling of nonlinear dusty plasma based on the VDP-ME; a compact sketch of this two-step idea is given at the end of this section. A class of new transformations is also introduced to ensure better convergence with reduced computation time; accordingly, the efficiency, functionality, stability, robustness, and parametric computation of the models are improved. Chapter 6 presents this brief summary and the concluding remarks.
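A compact sketch of the two-step idea: an autoregressive MLP fitted on lagged samples stands in for the NAR stage that captures the global trend, and a Gaussian RBF expansion fitted to its residual stands in for the local-feature stage. The oscillator-like target, lag count, number of centres, and width are illustrative assumptions rather than the thesis configuration.

import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0.0, 10.0, 400)
y = np.cos(t) * (1.0 + 0.2 * np.sin(3.0 * t))        # placeholder VDP-ME-like signal

# Step 1: NAR stage - predict y[i] from its previous `lags` samples with an MLP
lags = 5
X = np.column_stack([y[k:len(y) - lags + k] for k in range(lags)])
target = y[lags:]
nar = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0).fit(X, target)
global_fit = nar.predict(X)

# Step 2: RBF stage - fit the residual with Gaussian radial basis functions of time
residual = target - global_fit
centres = np.linspace(t[lags], t[-1], 30)
Phi = np.exp(-((t[lags:, None] - centres[None, :]) / 0.4) ** 2)
w, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
hybrid_fit = global_fit + Phi @ w

print("NAR-only RMSE:", np.sqrt(np.mean((target - global_fit) ** 2)))
print("hybrid RMSE  :", np.sqrt(np.mean((target - hybrid_fit) ** 2)))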

6.2 CONCLUSION

Three different bio-inspired computing paradigms, NARX, NAR-RBFs, and ARFIMA-LSTM, based on hybrid neurocomputing approaches, are presented for accurate and reliable modeling of nonlinear systems.

The neuro-fuzzy modeling based on NARX is designed and viably implemented for the prediction of rainfall for different datasets of five meteorological stations in Sindh, Pakistan. The hybrid NAR-RBFs approach is utilized to model the stochastic nonlinear dusty plasma represented by the VDP-ME in a predefined domain of inputs, for both large and small intervals. Different experiments for the hybrid model are conducted for the training and testing processes of the nonlinear modeling of the differential equations representing the dynamics of dust grain charge production and loss in dusty plasma systems. The outcomes of the proposed scheme show remarkable performance with reasonable accuracy as well as convergence, and the hybrid models handle nonlinear tendencies better than the individual models. The fractional hybrid paradigm provides a flexible tool for classes of long-memory models. The proposed hybrid ARFIMA-LSTM model outperforms the individual models for each scenario of the stock exchange datasets; its superior performance significantly enhances parameterization and convergence for better financial series prediction with exogenous inputs. The developed transformation contributes to the modeling process in terms of reduced search time, improved efficiency, and better accuracy. The hybrid methodology can be extended and generalized to build a framework for modeling the solutions of higher-order stiff differential equations.

In the future, one may implement the proposed designs of the hybrid neurocomputing models NARX, NAR-RBFs, and ARFIMA-LSTM as alternative, accurate, reliable, and stable computing paradigms for the solution of models in fluid dynamics, nanotechnology, circuit theory, combustion theory, astrophysics, atomic physics, plasma physics, nonlinear optics, random matrix theory, energy, bioinformatics, financial mathematics, economics, control, signal processing, and communications.


REFERENCES

1. Cohen, J., 2004. Bioinformatics—an introduction for computer scientists. ACM

Computing Surveys (CSUR), 36(2), pp.122-158.

2. Blum, C. and Roli, A., 2003. Metaheuristics in combinatorial optimization: Overview and

conceptual comparison. ACM computing surveys (CSUR), 35(3), pp.268-308.

3. Raja et.al 2017. Design of bio-inspired heuristic technique integrated with interior-point

algorithm to analyze the dynamics of heartbeat model. Applied Soft Computing, 52,

pp.605-629.

4. Ahmad et.al, 2017. Neural network methods to solve the Lane–Emden type equations

arising in thermodynamic studies of the spherical gas cloud model. Neural Computing

and Applications, 28(1), pp.929-944.

5. Sabir et.al, 2018. Neuro-heuristics for nonlinear singular Thomas-Fermi

systems. Applied Soft Computing, 65, pp.152-169.

6. Ahmad et.al, “Neuro-evolutionary computing paradigm for Painlevé equation-II in

nonlinear optics,” The European Physical Journal Plus, vol. 133, no. 5, pp.184, 2018.

7. Raja et.al, 2018. A new stochastic computing paradigm for nonlinear Painlevé II

systems in applications of random matrix theory. The European Physical Journal

Plus, 133(7), p.254.

8. Jamal et.al, 2019. Hybrid Bio-Inspired Computational Heuristic Paradigm for Integrated

Load Dispatch Problems Involving Stochastic Wind. Energies, 12(13), p.2568


9. Raja et al., 2018. A new stochastic computing paradigm for the dynamics of nonlinear singular heat conduction model of the human head. The European Physical Journal Plus, 133(9), p.364.

10. Khan, W.U., Ye, Z., Chaudhary, N.I. and Raja, M.A.Z., 2018. Backtracking search

integrated with sequential quadratic programming for nonlinear active noise control

systems. Applied Soft Computing, 73, pp.666-683.

11. Mehmood et.al, 2018. Parameter estimation for Hammerstein control autoregressive

systems using differential evolution. Signal, Image and Video Processing, 12(8),

pp.1603-1610.

12. Piccinini, G., 2004. The First computational theory of mind and brain: a close look at

mcculloch and pitts's “logical calculus of ideas immanent in nervous

activity”. Synthese, 141(2), pp.175-215.

13. Hebb, D.O., 1955. Drives and the CNS (conceptual nervous system). Psychological

review, 62(4), p.243.

14. Dreux et.al, 2015. Biochemical analysis of ascites fluid as an aid to etiological

diagnosis: a series of 100 cases of nonimmune fetal ascites. Prenatal Diagnosis, 35(3),

pp.214-220.

15. Zadeh, L.A., Klir, G.J. and Yuan, B., 1996. Fuzzy sets, fuzzy logic, and fuzzy systems:

selected papers (Vol. 6). World Scientific.

16. Ostergaard, J.J. and Gadeberg, K., FL Smidth and Co A/B, 1990. Method of controlling

a rotary kiln during start-up. U.S. Patent 4,910,684.

17. Ditto, W. and Munakata, T., 1995. Principles and applications of chaotic

systems. Communications of the ACM, 38(11), pp.96-102.


18. Rajagopalan et.al, 2003. Development of fuzzy logic and neural network control and

advanced emissions modeling for parallel hybrid vehicles (No. NREL/SR-540-32919).

National Renewable Energy Lab., Golden, CO.(US).

19. Wazwaz et.al, 2005. The sine–cosine methods for compact and noncompact solutions

of the nonlinear Klein–Gordon equation. Applied Mathematics and Computation,

167(2), pp.1179-1195.

20. Hosseini et.al, 2017. Modified Kudryashov method for solving the conformable time-

fractional Klein–Gordon equations with quadratic and cubic nonlinearities. Optik, 130,

pp.737-742.

21. Chan, R.T. and Hubbert, S., 2010. A numerical study of radial basis function based

methods for options pricing under the one dimension jump-diffusion model. arXiv

preprint arXiv:1011.5650.

22. Nyoni, T., 2018. Modeling and Forecasting Naira/USD Exchange Rate In Nigeria: a

Box-Jenkins ARIMA approach.

23. Fentis et.al, 2019. Short-term nonlinear autoregressive photovoltaic power forecasting

using statistical learning approaches and in-situ observations. International Journal of

Energy and Environmental Engineering, 10(2), pp.189-206.

24. Vazquez et.al, 2018. Fractional calculus as a modeling framework. Monografias

Matematicas Garcia de Galdean, 41, pp.187-197.

25. Evans et.al, 2017. Applications of fractional calculus in solving Abel-type integral

equations: Surface–volume reaction problem. Computers & mathematics with

applications, 73(6), pp.1346-1362.

26. Song, L., 2018. A Semianalytical Solution of the Fractional Derivative Model and Its

Application in Financial Market. Complexity, 2018.


27. AboBakr et.al, 2017. Experimental comparison of integer/fractional-order electrical

models of plant. AEU-International Journal of Electronics and Communications, 80,

pp.1-9.

28. Singh et.al, 2018. A fractional epidemiological model for computer viruses pertaining to

a new fractional derivative. Applied Mathematics and Computation, 316, pp.504-515.

29. Gholami et.al, 2019. Fractional pseudospectral integration/differentiation matrix and

fractional differential equations. Applied Mathematics and Computation, 343, pp.314-

327.

30. Caputo, M. and Fabrizio, M., 2017. The kernel of the distributed order fractional

derivatives with an application to complex materials. Fractal and Fractional, 1(1), p.13.

31. Bahaa, G.M. and Atangana, A., 2019. Necessary and Sufficient Optimality Conditions

for Fractional Problems Involving Atangana–Baleanu’s Derivatives. In Fractional

Derivatives with Mittag-Leffler Kernel (pp. 13-33). Springer, Cham.

32. Abdeljawad et.al, 2019. On a more general fractional integration by parts formulae and

applications. Physica A: Statistical Mechanics and its Applications, 536, p.122494.

33. Petráš, I. and Terpák, J., 2019. Fractional calculus as a simple tool for modeling and

analysis of long memory process in industry. Mathematics, 7(6), p.511.

34. Wu, S., 2019. Nonlinear information data mining based on time series for fractional

differential operators. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(1),

p.013114.

35. Liu, K., Chen, Y. and Zhang, X., 2017. An application of the seasonal fractional arima

model to the semiconductor manufacturing. IFAC-PapersOnLine, 50(1), pp.8097-8102.


36. Liu, K., Zhang, X. and Chen, Y., 2017, August. An evaluation of ARFIMA programs. In

ASME 2017 International Design Engineering Technical Conferences and Computers

and Information in Engineering Conference. American Society of Mechanical Engineers

Digital Collection.

37. Sengupta et.al, 2018. From von Neumann Architecture and Atanasoffs ABC to Neuro-

Morphic Computation and Kasabov’s NeuCube: Principles and Implementations. In

Learning Systems: From Theory to Practice (pp. 1-28). Springer, Cham.

38. Choi, H.K., 2018. Stock price correlation coefficient prediction with ARIMA-LSTM hybrid

model. arXiv preprint arXiv:1808.01560.

39. Song et.al, A globally enhanced general regression neural network for on-line multiple

emissions prediction of utility boiler. Knowledge-Based Systems, 118, pp.4-14.

40. Prusov et.al, 2019. Atmospheric Processes in Urban Area Elements. Cybernetics and

Systems Analysis, 55(1), pp.90-108.

41. Fang et.al, 2019. Natural disasters, climate change, and their impact on inclusive

wealth in G20 countries. Environmental Science and Pollution Research, 26(2),

pp.1455-1463.

42. Rahman et.al, 2017. Analysis and prediction of rainfall trends over Bangladesh using

Mann–Kendall, Spearman’s rho tests and ARIMA model. Meteorology and Atmospheric

Physics, 129(4), pp.409-424.

43. Bari et.al, Forecasting monthly precipitation in Sylhet city using ARIMA model. Civil and

Environmental Research, 7(1), pp.69-77.

44. Danladi et.al, 2018. Assessing the influence of weather parameters on rainfall to

forecast river discharge based on short-term. Alexandria Engineering Journal, 57(2),

pp.1157-1162.


45. Kuok, K.K., Kueh, S.M. and Chiu, P.C., 2019. Bat optimisation neural networks for

rainfall forecasting: Case study for Kuching city. Journal of Water and Climate

Change, 10(3), pp.569-579.

46. Mehr et.al, 2019. A hybrid support vector regression–firefly model for monthly rainfall

forecasting. International journal of environmental science and technology, 16(1),

pp.335-346.

47. Wu, T., Min, J. and Wu, S., 2019. A comparison of the rainfall forecasting skills of the

WRF ensemble forecasting system using SPCPT and other cumulus parameterization

error representation schemes. Atmospheric research, 218, pp.160-175.

48. Kuwajima et.al, 2019. Climate change, water-related disasters, flood control and

rainfall forecasting: a case study of the São Francisco River, Brazil. Geological Society,

London, Special Publications, 488, pp.SP488-2018.

49. Esteves et.al, 2019. Rainfall prediction methodology with binary multilayer perceptron

neural networks. Climate Dynamics, 52(3-4), pp.2319-2331.

50. Ebtehaj et.al, 2018. A new hybrid decision tree method based on two artificial neural

networks for predicting sediment transport in clean pipes. Alexandria engineering

journal, 57(3), pp.1783-1795.

51. Hammid et.al, 2018. Prediction of small hydropower plant power production in Himreen

Lake dam (HLD) using artificial neural network. Alexandria engineering journal, 57(1),

pp.211-221.

52. Hatata, A.Y. and Eladawy, M., 2018. Prediction of the true harmonic current

contribution of nonlinear loads using NARX neural network. Alexandria engineering

journal, 57(3), pp.1509-1518.


53. Ahmad, I., et.al,2019. Design of computational intelligent procedure for thermal analysis

of porous fin model. Chinese Journal of Physics, 59, pp.641-655.

54. Entchev et.al, 2018. Energy, economic and environmental performance simulation of a

hybrid renewable microgeneration system with neural network predictive

control. Alexandria engineering journal, 57(1), pp.455-473.

55. Ahmad, I., et.al,2016. Bio-inspired computational heuristics to study Lane–Emden

systems arising in astrophysics model. SpringerPlus, 5(1), p.1866.

56. Raja et.al, 2018. Design of artificial neural network models optimized with sequential

quadratic programming to study the dynamics of nonlinear Troesch’s problem arising in

plasma physics. Neural Computing and Applications, 29(6), pp.83-109.

57. Sabir, Z., et.al,2018. Neuro-heuristics for nonlinear singular Thomas-Fermi

systems. Applied Soft Computing, 65, pp.152-169.

58. Ahmad, I., et.al,2018. Neuro-evolutionary computing paradigm for Painlevé equation-II

in nonlinear optics. The European Physical Journal Plus, 133(5), p.184.

59. Ahmad, I., et.al,2017. Neural network methods to solve the Lane–Emden type

equations arising in thermodynamic studies of the spherical gas cloud model. Neural

Computing and Applications, 28(1), pp.929-944.

60. Khan, J.A., et.al,2015. Nature-inspired computing approach for solving non-linear

singular Emden–Fowler problem arising in electromagnetic theory. Connection

Science, 27(4), pp.377-396.

61. Mehmood, A., et.al,2018. Design of neuro-computing paradigms for nonlinear

nanofluidic systems of MHD Jeffery–Hamel flow. Journal of the Taiwan Institute of

Chemical Engineers, 91, pp.57-85.


62. Raja et.al, 2018. Bio-inspired computational heuristics for Sisko fluid flow and heat

transfer models. Applied Soft Computing, 71, pp.622-648.

63. Ahmad et.al,2018. Intelligent computing to solve fifth-order boundary value problem

arising in induction motor models. Neural Computing and Applications, 29(7), pp.449-

466.

64. Raja et.al, 2017. An intelligent computing technique to analyze the vibrational dynamics

of rotating electrical machine. Neurocomputing, 219, pp.280-299.

65. Raja et.al, 2018. A new stochastic computing paradigm for the dynamics of nonlinear

singular heat conduction model of the human head. The European Physical Journal

Plus, 133(9), p.364.

66. El-Shafie et.al, 2011. Adaptive neuro-fuzzy inference system based model for rainfall

forecasting in Klang River, Malaysia. International Journal of Physical Sciences, 6(12),

pp.2875-2888.

67. Scher, S. and Messori, G., 2019. How Global Warming Changes the Difficulty of

Synoptic Weather Forecasting. Geophysical Research Letters, 46(5), pp.2931-2939.

68. Ohia, M. and Sugimoto, S., 2019. Differences in climate change impacts between

weather patterns: possible effects on spatial heterogeneous changes in future extreme

rainfall. Climate Dynamics, 52(7-8), pp.4177-4191.

69. Gilleland et.al, 2019. Verification of Meteorological Forecasts for Hydrological

Applications. Handbook of Hydrometeorological Ensemble Forecasting, pp.923-951.

70. Al Balasmeh et.al, 2019. Trend analysis and ARIMA modeling for forecasting

precipitation pattern in Wadi Shueib catchment area in Jordan. Arabian Journal of

Geosciences, 12(2), p.27.


71. Tseng et.al, 2019. Forecasting the seasonal pollen index by using a hidden Markov

model combining meteorological and biological factors. Science of The Total

Environment, p.134246.

72. Asadi et.al, 2019. Rainfall-runoff modelling using hydrological connectivity index and

artificial neural network approach. Water, 11(2), p.212.

73. Nath et.al, 2019. Runoff estimation using modified adaptive neuro-fuzzy inference

system. Environmental Engineering Research, 25(4), pp.545-553.

74. Ashrafi, M., Chua, L.H. and Quek, C., 2019. The applicability of Generic Self-Evolving

Takagi-Sugeno-Kang neuro-fuzzy model in modeling rainfall–runoff and river

routing. Hydrology Research, 50(4), pp.991-1001.

75. Guzman et.al, 2019. Evaluation of Seasonally Classified Inputs for the Prediction of

Daily Groundwater Levels: NARX Networks Vs Support Vector

Machines. Environmental Modeling & Assessment, 24(2), pp.223-234.

76. Vignesh et.al, 2019. Spatial rainfall variability in peninsular India: a nonlinear dynamic

approach. Stochastic Environmental Research and Risk Assessment, 33(2), pp.465-

480.

77. Atangana, Abdon, and J. F. Gómez-Aguilar. "Decolonisation of fractional calculus rules:

Breaking commutativity and associativity to capture more natural phenomena." The

European Physical Journal Plus 133, no. 4 (2018): 166.

78. Arqub et.al, 2016. Numerical solutions of fuzzy differential equations using reproducing

kernel Hilbert space method. Soft Computing, 20(8), pp.3283-3302.

79. Al-Smadi et al. Application of reproducing kernel algorithm for solving second-order, two-point fuzzy boundary value problems. Soft Computing, 21(23), pp.7191-7206.


80. Arqub, O.A., 2017, Adaptation of reproducing kernel algorithm for solving fuzzy

Fredholm–Volterra integrodifferential equations. Neural Computing and

Applications, 28(7), pp.1591-1610.

81. Atangana, A. and Baleanu, D., 2016. New fractional derivatives with nonlocal and non-

singular kernel: theory and application to heat transfer model. arXiv preprint

arXiv:1602.03408.

82. Fang et.al, 2019. Natural disasters, climate change, and their impact on inclusive

wealth in G20 countries. Environmental Science and Pollution Research, 26(2),

pp.1455-1463.

83. Rehman et.al, 2019. Applying systems thinking to flood disaster management for a

sustainable development. International Journal of Disaster Risk Reduction, 36,

p.101101.

84. Mahessar et.al, 2019. Flash Flood Climatology in the Lower Region of Southern

Sindh. Engineering, Technology & Applied Science Research, 9(4), pp.4474-4479.

85. Bano et.al, 2019. Spatial and temporal changes in salinity of arable lands in Shah

Bandar Tehsil, Thatta District, Sindh. International Journal of Economic and

Environmental Geology, pp.37-45.

86. Kidwai et.al, 2019. The Indus Delta—Catchment, River, Coast, and People. In Coasts

and Estuaries (pp. 213-232). Elsevier.

87. Changnon, S., 2019. The great flood of 1993: Causes, impacts, and responses.

Routledge.

88. Pakistan Meteorological Department (PMD), Karachi. Rainfall data, Ref. No. CDP-7(4)/3/B/2015, Director C.D.P.C., Meteorological Complex, Gulistan-e-Jouhar, University Road, Karachi, Pakistan.


89. Phylaktis, K. and Ravazzolo, F., 2005. Stock prices and exchange rate

dynamics. Journal of international Money and Finance, 24(7), pp.1031-1053.

90. Lane, P.R. and Shambaugh, J.C., 2010. Financial exchange rates and international

currency exposures. American Economic Review, 100(1), pp.518-40

91. López Martín, M., 2019. Novel applications of Machine Learning to Network Traffic

Analysis and Prediction.

92. Pawar et.al, 2019. Stock Market Price Prediction Using LSTM RNN. In Emerging

Trends in Expert Applications and Security (pp. 493-503). Springer, Singapore.

93. Ahmed et.al, 1979. Analysis of freeway traffic time-series data by using Box-Jenkins

techniques (No. 722)

94. Atsalakis et.al, 2009. Surveying stock market forecasting techniques–Part II: Soft

computing methods. Expert Systems with Applications, 36(3), pp.5932-5941.

95. Franses et.al, 1996. Forecasting stock market volatility using (non‐linear) Garch

model. Journal of Forecasting, 15(3), pp.229-235.

96. Deisenroth, M.P., Rasmussen, C.E. and Peters, J., 2009. Gaussian process dynamic

programming. Neurocomputing, 72(7-9), pp.1508-1524.

97. Khedr, A.E. and Yaseen, N., 2017. Predicting stock market behavior using data mining

technique and news sentiment analysis. International Journal of Intelligent Systems and

Applications, 9(7), p.22.

98. Dong et.al, 2019. A simple approach to multivariate monitoring of production processes

with non-Gaussian data. Journal of Manufacturing Systems, 53, pp.291-304.

99. Liu et.al, 2017. An Evaluation of ARFIMA (Autoregressive Fractional Integral Moving

Average) Programs. axioms, 6(2), p.16.


100. Sheng, H. and Chen, Y., 2011. FARIMA with stable innovations model of Great

Salt Lake elevation time series. Signal Processing, 91(3), pp.553-561.

101. Diebold, F.X. and Rudebusch, G.D., 1989. Long memory and persistence in

aggregate output. Journal of monetary economics, 24(2), pp.189-209.

102. Gao, T., Chai, Y. and Liu, Y., 2017, November. Applying long short term

memory neural networks for predicting stock closing price. In 2017 8th IEEE

International Conference on Software Engineering and Service Science (ICSESS) (pp.

575-578). IEEE

103. Lieberman, O. and Phillips, P.C., 2008. Refined inference on long memory in

realized volatility. Econometric reviews, 27(1-3), pp.254-267.

104. Chen, K., Zhou, Y. and Dai, F., 2015, October. A LSTM-based method for

stock returns prediction: A case study of China stock market. In 2015 IEEE International

Conference on Big Data (Big Data) (pp. 2823-3024). IEEE.

105. Siami-Namini, S. and Namin, A.S., 2018. Forecasting economics and financial

time series: Arima vs. lstm. arXiv preprint arXiv:1803.06386.

106. Khare et.al, 2017, May. Short term stock price prediction using deep learning.

In 2017 2nd IEEE International Conference on Recent Trends in Electronics,

Information & Communication Technology (RTEICT) (pp. 482-486). IEEE.

107. Choi, H.K., 2018. Stock price correlation coefficient prediction with ARIMA-

LSTM hybrid model. arXiv preprint arXiv:1808.01560.

108. Fang, W., 2002. The effects of currency depreciation on stock returns:

Evidence from five East Asian economies. Applied Economics Letters, 9(3), pp.195-

199.


109. Debnath, L., 2004. A brief historical introduction to fractional

calculus. International Journal of Mathematical Education in Science and

Technology, 35(4), pp.487-501.

110. Song, L., 2018. A Semianalytical Solution of the Fractional Derivative Model

and Its Application in Financial Market. Complexity, 2018.

111. AboBakr et.al, 2017. Experimental comparison of integer/fractional-order

electrical models of plant. AEU-International Journal of Electronics and

Communications, 80, pp.1-9.

112. Kumar, M. and Rawat, T.K., 2016. Fractional order digital differentiator design

based on power function and least squares. International Journal of

Electronics, 103(10), pp.1639-1653.

113. Samko, S.G., Kilbas, A.A. and Marichev, O.I., 1993. Fractional integrals and

derivatives (Vol. 1993). Yverdon-les-Bains, Switzerland: Gordon and Breach Science

Publishers, Yverdon.

114. Caputo, M. and Fabrizio, M., 2015. A new definition of fractional derivative

without singular kernel. Progr. Fract. Differ. Appl, 1(2), pp.1-13.

115. Almeida, R., Tavares, D. and Torres, D.F., 2019. Fractional Calculus. In The

Variable-Order Fractional Calculus of Variations (pp. 1-19). Springer, Cham.

116. Baleanu, D. and Muslih, S.I., 2005, January. About Lagrangian formulation of

classical fields within Riemann-Liouville fractional derivatives. In International Design

Engineering Technical Conferences and Computers and Information in Engineering

Conference (Vol. 47438, pp. 1457-1464).

117. Graves et.al, 2014. A brief history of long memory: Hurst, Mandelbrot and the

road to ARFIMA. arXiv preprint arXiv:1406.6018.


118. Campos et.al, 2019. Dynamic Hurst Exponent in Time Series. arXiv preprint

arXiv:1903.07809.

119. Hsieh et.al, 2019. Modeling leverage and long memory in volatility in a

pure‐jump process. High Frequency.

120. Beran et al., 2016. Long-Memory Processes. Springer-Verlag, Berlin.

121. Shittu, O.I. and Yaya, O.S., 2011. On fractionally integrated logistic smooth

transitions in time series. CBN Journal of Applied Statistics, 2(1), pp.1-13.

122. Shi, C. and Zhuang, X., 2019. A Study Concerning Soft Computing

Approaches for Stock Price Forecasting. Axioms, 8(4), p.116.

123. Kavasseri, R.G. and Seetharaman, K., 2009. Day-ahead wind speed

forecasting using f-ARIMA models. Renewable Energy, 34(5), pp.1388-1393.

124. Nourikhah et.al 2015. Modeling and predicting measured response time of

cloud-based web services using long-memory time series. The Journal of

Supercomputing, 71(2), pp.673-696.

125. Caporale et.al. 2019. Long-term price overreactions: are markets

inefficient?. Journal of Economics and Finance, 43(4), pp.657-680.

126. Kumarasinghe et.al, 2019. An Intelligent Predicting Approach Based Long

Short-Term Memory Model Using Numerical and Textual Data: The Case of Colombo

Stock Exchange.

127. Zhang, J., Bargal, S.A., Lin, Z., Brandt, J., Shen, X. and Sclaroff, S., 2018. Top-

down neural attention by excitation backprop. International Journal of Computer

Vision, 126(10), pp.1084-1102.


128. Urooj Fatima, Assistant Manager, Marketing & Business Development Department / Program Manager, Pakistan Stock Exchange (PSX), Islamabad (PSX stock data).

129. López-Herrera et.al, 2012. Long memory behavior in the returns of the Mexican

stock market: Arfima models and value at risk estimation. International Journal of

Academic Research in Business and Social Sciences, 2(10), p.113.

130. Kumarasinghe et.al, 2019. An Intelligent Predicting Approach Based Long

Short-Term Memory Model Using Numerical and Textual Data: The Case of Colombo

Stock Exchange.

131. Gers, F.A., Eck, D. and Schmidhuber, J., 2002. Applying LSTM to time series

predictable through time-window approaches. In Neural Nets WIRN Vietri-01 (pp. 193-

200). Springer, London.

132. Hong, W.C., 2020. Modeling for Energy Demand Forecasting. In Hybrid

Intelligent Technologies in Energy Demand Forecasting (pp. 25-44). Springer, Cham.

133. Lu, X., He, P. and Xu, J., 2019. Error compensation-based time-space

separation modeling method for complex distributed parameter processes. Journal of

Process Control, 80, pp.117-126.

134. Jafari, R., Yu, W., Razvarz, S. and Gegov, A., 2019. Numerical methods for

solving fuzzy equations: A survey. Fuzzy Sets and Systems.

135. Raissi, M., Perdikaris, P. and Karniadakis, G.E., 2017. Physics informed deep

learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv

preprint arXiv:1711.10561.

136. Niska, H., Hiltunen, T., Karppinen, A., Ruuskanen, J. and Kolehmainen, M.,

2004. Evolving the neural network model for forecasting air pollution time

series. Engineering Applications of Artificial Intelligence, 17(2), pp.159-167.


137. Salehizadeh et.al, 2009, March. Local optima avoidable particle swarm

optimization. In 2009 IEEE Swarm Intelligence Symposium (pp. 16-21). IEEE.

138. Lim, Y.I., Le Lann, J.M. and Joulia, X., 2001. Accuracy, temporal performance

and stability comparisons of discretization methods for the numerical solution of Partial

Differential Equations (PDEs) in the presence of steep moving fronts. Computers &

Chemical Engineering, 25(11-12), pp.1483-1492.

139. Vasil'eva et.al, 1995. The boundary function method for singular perturbation

problems. Society for Industrial and Applied Mathematics.

140. Yuhas et.al 1989. Integration of acoustic and visual speech signals using

neural networks. IEEE Communications Magazine, 27(11), pp.65-71.

141. D'mello, S.K. and Kory, J., 2015. A review and meta-analysis of multimodal

affect detection systems. ACM Computing Surveys (CSUR), 47(3), pp.1-36.

142. Bernardi et.al, 2016. Automatic description generation from images: A survey

of models, datasets, and evaluation measures. Journal of Artificial Intelligence

Research, 55, pp.409-442.

143. Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H. and Ng, A.Y., 2011, January.

Multimodal deep learning. In ICML.

144. Kalchbrenner, N. and Blunsom, P., 2013, October. Recurrent continuous

translation models. In Proceedings of the 2013 Conference on Empirical Methods in

Natural Language Processing (pp. 1700-1709).

145. Rajendran et.al, 2015. Bridge correlational neural networks for multilingual

multimodal representation learning. arXiv preprint arXiv:1510.03519.


146. R. L. Merlino and J. A. Goree, “Dusty plasmas in the laboratory, industry, and

space,” Physics Today, vol. 57, no. 7, pp. 32-39, 2004.

147. Fortov et.al, 2005. Complex (dusty) plasmas: Current status, open issues,

perspectives. Physics reports, 421(1-2), pp.1-103.

148. Song et.al, 2019. Modeling Air Pollution Transmission Behavior as Complex

Network and Mining Key Monitoring Station. IEEE Access, 7, pp.121245-121254.

149. Nawaz, S.J., Sharma, S.K., Wyne, S., Patwary, M.N. and Asaduzzaman, M.,

2019. Quantum machine learning for 6G communication networks: State-of-the-art and

vision for the future. IEEE Access, 7, pp.46317-46350.

150. Cai, D., Yu, Y. and Wei, J., 2018. A modified artificial bee colony algorithm for

parameter estimation of fractional-order nonlinear systems. IEEE Access, 6, pp.48600-

48610.

151. Li et.al, 2019. A special points-based hybrid prediction strategy for dynamic

multi-objective optimization. IEEE Access, 7, pp.62496-62510..

152. Dziekonski et.al, 2018. Preconditioners with Low Memory Requirements for

Higher-Order Finite-Element Method Applied to Solving Maxwell’s Equations on

Multicore CPUs and GPUs. IEEE Access, 6, pp.53072-53079.

153. Bo et.al, 2019, May. Research of Typical Line Loss Rate in Transformer

District Based on Data-Driven Method. In 2019 IEEE Innovative Smart Grid

Technologies-Asia (ISGT Asia) (pp. 786-791). IEEE.

154. Hannan et.al 2010. Generalized regression neural network and radial basis

function for heart disease diagnosis. International Journal of Computer

Applications, 7(13), pp.7-13.


155. Vukovic et.al, 2020. Neural network forecasting in prediction Sharpe ratio:

Evidence from EU debt market. Physica A: Statistical Mechanics and its

Applications, 542, p.123331.

156. Tsoulos et.al, 2019. NNC: A tool based on Grammatical Evolution for data

classification and differential equation solving. SoftwareX, 10, p.100297.

157. Lagaris et.al, 1998. Artificial neural networks for solving ordinary and partial

differential equations. IEEE transactions on neural networks, 9(5), pp.987-1000.

158. Mall, S. and Chakraverty, S., 2013. Comparison of artificial neural network

architecture in solving ordinary differential equations. Advances in Artificial Neural

Systems, 2013.

159. Malek, A. and Beidokhti, R.S., 2006. Numerical solution for high order

differential equations using a hybrid neural network—optimization method. Applied

Mathematics and Computation, 183(1), pp.260-271.

160. Lee, C., Kim, J., Babcock, D. and Goodman, R., 1997. Application of neural

networks to turbulence control for drag reduction. Physics of Fluids, 9(6), pp.1740-

1747.

161. Yuan, M. and Lin, Y., 2006. Model selection and estimation in regression with

grouped variables. Journal of the Royal Statistical Society: Series B (Statistical

Methodology), 68(1), pp.49-67.

162. Sakthivel et.al, 2010. Application of support vector machine (SVM) and

proximal support vector machine (PSVM) for fault classification of monoblock

centrifugal pump. International Journal of Data Analysis Techniques and

Strategies, 2(1), pp.38-61.


163. Mall, S. and Chakraverty, S., 2013. Comparison of artificial neural network

architecture in solving ordinary differential equations. Advances in Artificial Neural

Systems, 2013.

164. Yadav, R., Sharma, S.K. and Tarhini, A., 2016. A multi-analytical approach to

understand and predict the mobile commerce adoption. Journal of enterprise

information management.

165. Tsoulos et.al, 2019. NNC: A tool based on Grammatical Evolution for data

classification and differential equation solving. SoftwareX, 10, p.100297.

166. Yang et.al, 2020. Neural network algorithm based on Legendre improved

extreme learning machine for solving elliptic partial differential equations. Soft

Computing, 24(2), pp.1083-1096.

167. Skala, V., 2011. Incremental radial basis function computation for neural

networks. WSEAS Transactions on Computers, 10(11), pp.367-378.

168. Mai-Duy, N. and Tran-Cong, T., 2001. Numerical solution of differential

equations using multiquadric radial basis function networks. Neural networks, 14(2),

pp.185-199.

169. Atthajariyakul, S. and Leephakpreeda, T., 2006. Fluidized bed paddy drying in

optimal conditions via adaptive fuzzy logic control. Journal of food engineering, 75(1),

pp.104-114.

170. Motsa, S.S. and Sibanda, P., 2012. A note on the solutions of the Van der Pol

and Duffing equations using a linearisation method. Mathematical Problems in

Engineering, 2012.

171. He, J.H., 2006. Some asymptotic methods for strongly nonlinear

equations. International journal of Modern physics B, 20(10), pp.1141-1199.


172. Nourazar, S. and Mirzabeigy, A., 2013. Approximate solution for nonlinear

Duffing oscillator with damping effect using the modified differential transform

method. Scientia Iranica, 20(2), pp.364-368.

173. Ibsen, L.B., Barari, A. and Kimiaeifar, A., 2010. Analysis of highly nonlinear

oscillation systems using He’s max-min method and comparison with homotopy

analysis and energy balance methods. Sadhana, 35(4), pp.433-448.

174. Njah, A.N. and Vincent, U.E., 2008. Chaos synchronization between single and

double wells Duffing–Van der Pol oscillators using active control. Chaos, Solitons &

Fractals, 37(5), pp.1356-1361.

175. Yu, S.Z., 2010. Hidden semi-Markov models. Artificial intelligence, 174(2),

pp.215-243.

176. Hu, K. and Chung, K.W., 2013. On the stability analysis of a pair of van der Pol

oscillators with delayed self-connection, position and velocity couplings. Aip

Advances, 3(11), p.112118.

177. Raja et.al, 2018. Intelligent computing for Mathieu’s systems for parameter

excitation, vertically driven pendulum and dusty plasma models. Applied Soft

Computing, 62, pp.359-372.

178. Hannan et.al, 2010. Generalized regression neural network and radial basis

function for heart disease diagnosis. International Journal of Computer

Applications, 7(13), pp.7-13.

179. Raj, J.S. and Ananthi, J.V., 2019. Recurrent neural networks and nonlinear

prediction in support vector machines. Journal of Soft Computing Paradigm

(JSCP), 1(01), pp.33-40.


180. Wedge, D.C., Ingram, D.M., McLean, D.A., Mingham, C.G. and Bandar, Z.A.,

2006. On global-local artificial neural networks for function approximation.

181. Krakhovskaya, N. and Astakhov, S., 2018, October. Forced Synchronization of

Central Pattern Generator of the Van Der Pol Oscillator With an Additional Feedback

Loop. In 2018 2nd School on Dynamics of Complex Networks and their Application in

Intellectual Robotics (DCNAIR) (pp. 72-74). IEEE.

182. Domínguez-Morales et.al, 2019. Bio-Inspired Stereo Vision Calibration for

Dynamic Vision Sensors. IEEE Access, 7, pp.138415-138425.

183. Bansal et.al, 2018. Analysing convergence, consistency, and trajectory of

artificial bee colony algorithm. IEEE Access, 6, pp.73593-73602.

184. El-Dib, Y., 2018. Stability analysis of a strongly displacement time-delayed

Duffing oscillator using multiple scales homotopy perturbation method. Journal of

Applied and Computational Mechanics, 4(4), pp.260-274.

185. Wang et.al, 2018. Disparity estimation for camera arrays using reliability

guided disparity propagation. IEEE Access, 6, pp.21840-21849

186. Lu et.al, 2018. A novel approach for video text detection and recognition

based on a corner response feature map and transferred deep convolutional neural

network. IEEE Access, 6, pp.40198-40211.

187. Ahmad, I., Raja, M.A.Z., Bilal, M. and Ashraf, F., 2017. Neural network

methods to solve the Lane–Emden type equations arising in thermodynamic studies of

the spherical gas cloud model. Neural Computing and Applications, 28(1), pp.929-944.

188. Sabir et.al, 2018. Neuro-heuristics for nonlinear singular Thomas-Fermi

systems. Applied Soft Computing, 65, pp.152-169.


189. Ahmad et.al, 2018. Neuro-evolutionary computing paradigm for Painlevé

equation-II in nonlinear optics. The European Physical Journal Plus, 133(5), p.184.

190. Raja et.al 2018. A new stochastic computing paradigm for nonlinear Painlevé II

systems in applications of random matrix theory. The European Physical Journal

Plus, 133(7), p.254.

191. Jamal, R., Men, B., Khan, N.H. and Raja, M.A.Z., 2019. Hybrid Bio-Inspired

Computational Heuristic Paradigm for Integrated Load Dispatch Problems Involving

Stochastic Wind. Energies, 12(13), p.2568.

192. Raja et al., 2018. A new stochastic computing paradigm for the dynamics of nonlinear singular heat conduction model of the human head. The European Physical Journal Plus, 133(9), p.364.

193. Raja et al., 2018. Bio-inspired computational heuristics to study models of HIV infection of CD4+ T-cell. International Journal of Biomathematics, 11(02), p.1850019.

194. Raja, M.A.Z., Shah, F.H. and Syam, M.I., 2018. Intelligent computing approach

to solve the nonlinear Van der Pol system for heartbeat model. Neural Computing and

Applications, 30(12), pp.3651-3675.

195. Khan et.al, 2018. Backtracking search integrated with sequential quadratic

programming for nonlinear active noise control systems. Applied Soft Computing, 73,

pp.666-683

196. Mehmood et al., 2018. Parameter estimation for Hammerstein control autoregressive systems using differential evolution. Signal, Image and Video Processing, 12(8), pp.1603-1610.


197. Chaudhary et al. Backtracking search optimization heuristics for nonlinear Hammerstein controlled auto regressive auto regressive systems. ISA Transactions, 91, pp.99-113.


PUBLICATION