
Hybrid SVR-ACNN Model: Proposed Video Super-Resolution Method for Video Enhancement

1 Padma Reddy A. M, 2 Udayarani
1 Sai Vidya Institute of Technology, Rajanukunte, Via Yelahanka, Bengaluru, Karnataka 560064, India
[email protected]
2 Reva University, Kattigenahalli, Bengaluru, Karnataka 560064, India


ABSTRACT: Video super-resolution techniques are the need of the hour for high-resolution display devices, as genuinely high-resolution video content remains scarce. Even though a large number of techniques have been employed for video super-resolution, the existing techniques face serious challenges under various conditions. Thus, this paper proposes an effective video super-resolution strategy using the hybrid Support Vector Regression-Actor Critic Neural Network (SVR-ACNN) model for video enhancement. The super-resolution images formed using the individual SVR model and Actor Critic Neural Network are integrated using the weighted average concept. The Actor Critic Neural Network is tuned optimally using the proposed Fractional-based Sine Cosine Algorithm (F-SCA), which is responsible for convergence to the global optimum. The experimentation of the proposed method uses three videos taken from the Cambridge-driving Labeled Video Database (CamVid), and the results are analyzed for three scaling factors. The outcome of the analysis proves that the proposed method offers a better super-resolution image, with a PSNR, SSIM, and SDME of 33.6447 dB, 0.9398, and 45.2779, respectively.

Subject Categories and Descriptors
[I.2.10 Vision and Scene Understanding]; Video: [H.4.3 Communications Applications]; Videotex: [H.5.1 Multimedia Information Systems]; Video

General Terms
Super-resolution, Video, Support Vector Regression, Neural Networks

Keywords: Video Super-resolution, SCA, Support Vector Regression, Video Enhancement, Fractional Theory

Received: 19 September 2018, Revised 5 December 2018, Accepted 14 December 2018

Review Metrics: Review Score 4.3/6, Revise Scale: 0-6, Inter-reviewer Consistency: 82%

DOI: 10.6025/jdim/2019/17/2/87-103

1. Introduction

Digital video plays a major role in day-to-day life, and videos [33] and images with high resolution are widely employed, as they minimize the computation cost required for display, processing, and analysis. For instance, there is a clear need for displaying Low-Resolution (LR) videos, namely Standard-Definition (SD) video signals, on High-Definition (HD) displays with greater quality. A common resolution issue in surveillance videos is that they lose resolution because the required frame rate has to be guaranteed for dynamic scenes [10]. Resolution issues can be rectified using suitable video enhancement strategies, and the main aim of video enhancement is to unveil the information of the video that is hidden [13]. Video enhancement is approached as the formulation of a high-resolution, high-quality video out of a low-resolution video, as per the requirements of the specific application. Thus, video enhancement aims at improving the clarity of the input video, as most applications use digital video for processing, traffic verification, criminal investigations, and so on, which requires good clarity for analysis [2].

Video enhancement is performed in the spatial domain or the frequency domain. In the spatial domain, the pixels are manipulated directly in the spatial plane, whereas in the frequency domain, the spatial frequency spectrum of the image is modified [11]. Even though video enhancement derives high-quality videos, there are degradation factors affecting the quality enhancement process. Problems arise due to low contrast, such that extraction of an object from a dark background becomes an issue. Moreover, problems may stem from the limited expertise of human operators and the poor quality of the video [12]. The most common method employed for overcoming the risks of dealing with surveillance videos is video Super-Resolution (SR) [1]. Video-based super-resolution aims at generating high-resolution frames by complementing the details of the image pixels [2] [5]. The aim of video SR is to generate High-Resolution (HR) video frames from a sequence of Low-Resolution (LR) inputs. Video super-resolution has drawn a lot of attention in the last few years, both in academia and in industry [10] [11]. The available HR video devices are highly expensive for generating, storing, and transmitting HR videos; hence, the demand falls on modern SR techniques that facilitate the generation of HR videos from LR ones [3].

Video SR techniques enable video coding/decoding [14], face video hallucination [15], video surveillance systems [16], remote sensing systems [17], medical image analysis [18], and stereoscopic video processing [19] [11] [4]. SR techniques are grouped into two main categories: single-image based and multi-frame based [20], [21]. Single-image-based SR methods use interpolation-based and example-based methods. Interpolation-based methods exhibit low computational cost, but with limited restoration performance. Bayes' theorem [34] and linear regression [39] have been utilized in video super-resolution. Researchers have focused on example-based single-image SR, in which external and/or internal exemplars are used to learn mappings from low-resolution patches to high-resolution patches. Example-based SR algorithms use nearest neighbor [22], neighbor embedding [23], sparse representation [24], anchored neighbor regression [25], or deep learning [26] [4]. Many existing multi-frame SR methods that model the long-term and short-term dependencies through the extraction of the subpixel motions in the video frames have been developed in the literature. Such modeling has proven effective using Recurrent Neural Networks (RNNs) [27], as they possess a higher degree of temporal dependency modeling in sequential data processing [7]. Video SR techniques can be used in conjunction with other image processing techniques, such as medical image analysis [35, 36, 37, 38].

This paper presents an effective video enhancement strategy using the proposed SVR-ACNN model. The LR video is transformed into the SR video, and the model processes the individual frames of the video. To enhance the quality of the video, the individual frames are first fed to the SVR model and the Actor Critic Neural Network (ACNN) individually, and the obtained super-resolution images are subjected to the weighted average concept based on the weights. The resulting super-resolution image sustains the video quality and exhibits a higher degree of contrast, making it effective for further video processing. ACNN is tuned using the proposed algorithm, which integrates the fractional concept into the standard Sine Cosine Algorithm (SCA). The proposed F-SCA inherits the advantages of both fractional theory and SCA; it possesses better convergence and provides globally optimal weights to tune the ACNN.

The major contributions of the paper are:

F-SCA algorithm for tuning the optimal weights of ACNN: The weights of ACNN are tuned optimally using the proposed algorithm, which integrates the fractional concept into SCA. The globally optimal weights derived for ACNN yield the super-resolution video.

Hybrid SVR-ACNN model: The proposed hybrid SVR-ACNN model generates the super-resolution image. Initially, the low-resolution image is raised to high resolution individually using two models, SVR and ACNN, and finally the high-resolution image is formed using the weighted average.

The organization of the paper is as follows:

Section 1 introduces the paper; section 2 elaborates the literature review of super-resolution methods. Section 3 states the problem, the proposed method of super-resolution is discussed in section 4, and section 5 details the results and discussion of the proposed method. Finally, section 6 concludes the paper.

2. Motivation

This section presents a review of the literature on various existing video super-resolution methods. These research papers are selected and reviewed according to their publication years, based on the video super-resolution techniques they employ.

2.1 Related Works
Armin Kappeler et al. [1] presented Convolutional Neural Networks (CNNs) that used the temporal dimension of videos for increasing the spatial resolution. Consecutive frames were motion compensated and used as input to the CNN. Kun Li et al. [2] developed a method that automatically selected and obtained a super-resolved image. The results were based on spatial-temporal characteristics, but the method was not effective. Wenhan Yang et al. [3] suggested Spatial-Temporal Recurrent Residual Networks (STR-ResNet) for video super-resolution. This network simultaneously models the high-frequency details of single frames, the differences between HR and LR frames, as well as the changes across these adjacent detail frames. Dingyi Li and Zengfu Wang [4] designed a video SR algorithm that was able to handle large and complex motions adaptively. Amar B. Deshmukh and N. Usha Rani [5] designed a model that alleviated the resolution issues, termed the fractional-Grey Wolf optimizer-based kernel weighted regression model. The merit of the method is that there is no degradation in the quality of the image: it provides a super-resolution image without degrading the quality of the LR image, but the performance is poor in case of greater values of the Second Derivative like Measure of Enhancement (SDME). Yawei Li et al. [6] designed an adaptive factor that was integrated into Non-Local Means (NLM) algorithms, overcoming the drawback of a fixed decaying factor and searching window. The robustness of the method was better, but it depended on pixel-wise computation. Yan Huang et al. [7] designed a bidirectional recurrent convolutional network for performing multi-frame SR. The method offered low computational complexity and ran an order of magnitude faster than other multi-frame SR methods, but it was not applicable to large-scale high-resolution video. Di Chen et al. [8] modeled a method through the integration of a Compensation-based TV (CTV) regularization term with a Multi-Non-local Low-Rank (MNLR) regularization term in the optimization algorithm. The algorithm minimized the negative impacts and possessed the capacity to withstand noise, but it suffered from noise effects when using a large database.

2.2 Challenges
• The process of generating super-resolution frames may yield high-quality results, but the methods fail to take advantage of significant correlations between adjacent frames. Moreover, these methods impose high computational demands in case of a large number of frames [2].

• The performance of example-based super-resolution methods [4] is found to be better, but their results may be implausible, as they depend on the quality of the training datasets.

• Super-resolution based on the CNN model [1] suffers from high computational cost, as it is time-consuming, and it also suffers from visual artifacts caused by the complex motions present in the video frames.

• Self-enhancement techniques [12] are straightforward to implement, but they suffer with dark original videos, as they lose information at the time of pre-processing. The learning-based or example-based methods [2] involve highly complex computations as a result of dictionary training and patch matching.

3. Problem Statement

The ultimate goal of the paper is to obtain an SR video from an LR video; the conversion of the LR video into the SR video is based on a scaling factor, such that the pixels in the image are modified without affecting the quality of the image. Consider an LR video V with n frames,

V = {f_1, f_2, ..., f_i, ..., f_n}  (1)

where V denotes the LR video and f_i indicates the i-th frame. The pixel of the i-th frame centered at the location (g, h) is denoted as f_i(g, h). The dimension of the i-th frame centered at (g, h) is given as (x × y), and the scaling factor is denoted as s. The LR image is converted to the HR image based on the scaling factor, which is set by the user, and the HR image is derived using the SVR and ACNN models. The results from the two models are subjected to the weighted average to form the super-resolution image of the i-th frame, and the resulting SR video is given as,

S = {S_1, S_2, ..., S_i, ..., S_n}  (2)

The pixel location of the i-th frame is denoted as (g, h), and the dimension of the i-th super-resolved frame is given as (s × x, s × y). The main intention of the paper is to transform the low-resolution image of dimension (x, y) into the high-resolution image of dimension (a × b) ∈ [(s × x) × (s × y)].
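As a quick worked example of this notation (the numbers are hypothetical, chosen only for illustration), the dimension bookkeeping implied by the scaling factor s reduces to:

```python
# Illustrative dimension bookkeeping for the scaling factor s (values hypothetical).
x, y = 120, 160        # LR frame dimensions (x, y)
s = 3                  # user-set scaling factor
a, b = s * x, s * y    # SR frame dimensions (a x b) = (s*x) x (s*y)
print((x, y), '->', (a, b))   # (120, 160) -> (360, 480)
```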

4. The Proposed Method of Forming the Super-Resolution Image using the Proposed SVR-ACNN Model

The aim of the proposed hybrid model is to form the super-resolution image, for which the hybrid model is developed using the SVR model and ACNN. Initially, the low-resolution image is converted to the super-resolution image using the support vector regression model and ACNN. The ACNN model is trained using the proposed F-SCA algorithm, which integrates SCA and the fractional concept. The proposed algorithm tunes the network adaptively to generate the super-resolution image. The super-resolution images formed using the SVR model and ACNN are averaged based on the weighted average to form the final super-resolution output. The architecture of the proposed method of forming the super-resolution image is depicted in figure 1.
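The flow of figure 1 can be summarised in a short sketch. This is not the authors' implementation (which is in MATLAB); it is a minimal Python outline, assuming `svr_super_resolve` and `acnn_super_resolve` are stand-ins for the two models described in sections 4.1 and 4.2:

```python
import numpy as np

def hybrid_sr_video(frames, s, beta, svr_super_resolve, acnn_super_resolve):
    """Per-frame hybrid super-resolution: each LR frame is super-resolved by
    the SVR model and by the F-SCA-tuned ACNN, and the two estimates are
    fused by the weighted average of equation (22)."""
    sr_frames = []
    for f in frames:
        f1 = svr_super_resolve(f, s)   # SR estimate from the SVR path
        f2 = acnn_super_resolve(f, s)  # SR estimate from the ACNN path
        sr_frames.append(beta * f1 + (1.0 - beta) * f2)
    return np.stack(sr_frames)
```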

4.1 Super-Resolution Image using the Support Vector Regression Model
The LR image, in other words the i-th frame f_i of the video V, is subjected to the SVR model, and the first step in SVR is the establishment of the kernel regression coefficient. The kernel regression coefficient depends not only on the location and density but also on the shape. The shape of the kernel regression is square, and its size is set by the user. Therefore, the kernel size is based on the scaling factor that is set by the user.


Figure 1. The architecture of developing the super-resolution image using the proposed hybrid model

The kernel regression coefficient is organized in matrix format and is computed based on the distance value with respect to the center pixel. The kernel regression coefficient is denoted as,

C = {K_pq ; 0 ≤ p, q ≤ r}  (3)

The distance-based integer matrix is based on the kernel coefficient, and it is determined using the regression model [31].

4.1.1 Generation of the Super-resolution Image
The second step is the formation of the SR image, which is obtained through the interpolation of the kernel regression matrix with the LR image. The pixel values of the newly generated SR image are produced using support vector regression, and the dimension of the SR image is based on the scaling factor. The unknown pixel values of the SR image are calculated from the neighboring pixels of the LR image. The SR image obtained using SVR is denoted as f_i^1(g, h). The pixel of the SR image is determined as,

$$ f_i^1(g, h) = \frac{1}{g \times h} \sum_{c_1=1}^{g} \sum_{c_2=1}^{h} f_i(g, h) \times R(g, h) \qquad (4) $$

where c_1 and c_2 are the rows and columns of the sub-image or the i-th frame, (g × h) represents the dimension of the frame, and R(g, h) denotes the kernel regression matrix. f_i(g, h) is the i-th frame centered at (g, h). The kernel regression matrix is the dot product of the arbitraries, which may or may not be known. The arbitraries refer to the pixels, and the unknown pixels are based on the similarity metric determined from the neighboring pixels [31]. The dimension (g × h) ∈ [(s × x) × (s × y)] is the dimension of the SR image.
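A sketch of this step follows. The paper states only that the kernel is square and distance-based; the inverse-distance weighting profile below and the function names are assumptions:

```python
import numpy as np

def kernel_regression_matrix(r):
    """Square, distance-based kernel coefficient matrix (equation (3)):
    weights decay with Euclidean distance from the centre pixel. The exact
    decay profile is an assumption; the paper specifies only distance-dependence."""
    c = r // 2
    p, q = np.mgrid[0:r, 0:r]
    d = np.sqrt((p - c) ** 2 + (q - c) ** 2)
    return 1.0 / (1.0 + d)

def sr_pixel(patch, R):
    """Estimate one unknown SR pixel from an LR neighbourhood, in the spirit
    of equation (4): a kernel-weighted average normalised by the patch size."""
    return np.sum(patch * R) / patch.size
```

For a scaling factor s = 2 and a 3 × 3 kernel, each missing SR pixel would be estimated from the 3 × 3 LR neighbourhood around its back-projected position.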

4.2 Generation of the Super-resolution Image using the Proposed F-SCA-based Actor Critic Neural Network
The aim of the Actor-Critic Neural Network [30] is to generate the SR image using the i-th frame of the LR video. The individual frames from the LR video are given as input to the ACNN, which transforms the LR image into the HR image without any degradation in the quality of the image. ACNN consists of two modules, namely the actor and critic modules, that form the SR image. The advantage of ACNN is that it is adaptive, and ACNN is a learning-based approach for forming the SR image. Each of the modules in ACNN is built from an input layer, a hidden layer, and an output layer, and the actor module predicts the output that forms the input to the critic module. ACNN is adaptive due to the adaptive weights computed using the proposed F-SCA. The input to the actor module is the i-th frame, rescaled by the scaling factor s, multiplied with the weights of the input layer to present the input to the hidden layer, and the output is predicted as the output of the actor module. The output of the actor module, along with the SR image, is presented to the input layer of the critic module, which performs the successive computations in the hidden and output layers to derive the SR image. In other words, the network obtains the pixel values of the SR image, which is transformed based on the scaling factor. Figure 2 shows the architecture of the Actor-Critic Neural Network employed to generate the SR images. The ACNN inherits reinforcement learning capabilities.

Figure 2. The architecture of Actor-Critic Neural Network for SR image formation

4.2.1 Actor Module
The actor module is the main module of ACNN; the actor employs a parameterized method, namely a Neural Network (NN), and an NN with a single hidden layer serves as the actor module. The input to the actor module is I, which carries the information of the test statistics and the Eigenvalues of the channel model. The successive computations performed in the input and hidden layers contribute to the output, which forms the input to the critic module. Thus, the output of the actor module is represented as,

O^A(t) = w^A(t) × σ(u^A(t) × f_i^A(x, y))  (5)

where u^A(t) is the weight between the input and the hidden layers, and w^A(t) denotes the weight between the hidden and the output layers. The input vector, that is, the i-th video frame, denoted as f_i^A(x, y), forms the input to the actor module. The output of the actor module is denoted as O^A(t). The dimension of the i-th frame is converted from (x × y) to (a × b) ∈ [(s × x) × (s × y)] using the scaling factor. This dimensionally converted i-th frame f_i is the actual input to the actor module; thus, the dimension of the input vector is (a × b). The weights of the actor module are described as W_1 ∈ {w^A(t), u^A(t)}. The activation function is the hyperbolic tangent function, denoted as,

$$ \sigma(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \qquad (6) $$

The activation function given in equation (6) is the hyperbolic tangent, denoted as σ(z) for simplicity.
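A minimal sketch of the actor's forward pass of equations (5) and (6) follows; the weight shapes and hidden size are illustrative assumptions:

```python
import numpy as np

def sigma(z):
    # Hyperbolic tangent activation of equation (6).
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

def actor_forward(f_iA, u_A, w_A):
    """O_A(t) = w_A(t) * sigma(u_A(t) * f_i^A), equation (5).
    f_iA is the flattened, rescaled frame of dimension a*b;
    u_A maps input -> hidden, w_A maps hidden -> output."""
    return w_A @ sigma(u_A @ f_iA)

# Hypothetical shapes: 16 hidden units over a 12-dimensional input.
rng = np.random.default_rng(0)
f_iA = rng.standard_normal(12)
u_A = rng.standard_normal((16, 12))
w_A = rng.standard_normal((1, 16))
print(actor_forward(f_iA, u_A, w_A))
```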

4.2.2 Critic Module
The critic module generates the SR image; the inputs to the critic module are the output of the actor module and the i-th frame f_i of dimension (a × b), which denotes the SR image. The critic module determines the unknown pixel values of the frame. The adaptive weights are used to achieve a smooth prediction with better accuracy. The output of the critic module is represented as,

O^B(t) = w^B(t) × σ(u^B(t) × f_i^B(a, b))  (7)

O^B(t) = w^B(t) × σ^B(t)  (8)

where f_i^B(a, b) = [f_i^A(a, b), O^A(t)] is the input to the critic module, u^B(t) is the weight between the input and the hidden layers, w^B(t) is the weight between the hidden and the output layers, and σ^B(t) = σ(u^B(t) × f_i^B(a, b)) is shorthand for the hidden-layer activation. The weights of the critic module are denoted as W_2 ∈ {w^B(t), u^B(t)}. The output of the critic module yields the unknown pixel values of the frame that constitute the SR image, and this output is denoted as O^B(t).
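The critic's forward pass of equations (7) and (8) can be sketched in the same style, reusing the `sigma` helper from the actor sketch above; again, the shapes are assumptions:

```python
import numpy as np

def critic_forward(f_iA, o_actor, u_B, w_B, sigma=np.tanh):
    """O_B(t) = w_B(t) * sigma(u_B(t) * f_i^B), equations (7)-(8), with the
    critic input f_i^B = [f_i^A, O_A(t)]: the rescaled frame concatenated
    with the actor's prediction. np.tanh equals the sigma of equation (6)."""
    f_iB = np.concatenate([f_iA, np.atleast_1d(o_actor).ravel()])
    return w_B @ sigma(u_B @ f_iB)
```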

4.2.3 Proposed F-SCA for tuning the Optimal Weights of ACNN
The main role of the proposed algorithm is to generate the optimal weights, which change adaptively such that the SR image is generated effectively. The proposed F-SCA algorithm is the integration of SCA [28] and the fractional concept [32], such that F-SCA inherits the advantages of both. The fractional concept possesses the tendency to avoid convergence to a local optimum and drives convergence to the globally optimal solution. The fractional theory in the proposed algorithm increases its convergence rate and improves the performance of SCA. Moreover, the fractional theory keeps a record of past events; hence, the proposed F-SCA holds an inherent memory property. The proposed F-SCA algorithm converges fast to the globally optimal solution and provides a better optimization experience.

SCA [28] is a population-based optimization algorithm; it begins the optimization with the generation of random solutions that are evaluated using the objective function. The performance of the algorithm is enhanced using a set of rules that progress the optimization. Population-based algorithms look for the optimal solution iteratively; hence, the solution cannot be generated in a single iteration. The generation of random solutions over successive iterations increases the chances of converging to the globally optimal solution. The algorithm consists of two phases: the exploration phase and the exploitation phase. In the exploration phase, random solutions are generated, and they change continuously based on the objective function over the successive iterations. The degree of randomness is high in the exploration phase, since the selection of promising areas in the search space requires a higher degree of randomness, whereas randomness is lower in the exploitation phase. Therefore, with a random set of solutions, the probability of finding the globally optimal solution increases. The advantage of SCA is that it is capable of solving real optimization problems with unknown search spaces, and the algorithm uses the sine and cosine functions for exploring and exploiting the region between solutions with the aim of converging to the best solution. The algorithmic steps are as follows:

Initialization: The weights of ACNN are initialized randomly, and they are iterated toward the globally optimal weights that ensure the effectiveness of the ACNN. The initialization follows that of SCA, wherein the solutions are represented as z_h, 1 ≤ h ≤ H, where H is the size of the population.

Evaluate the Objective Function: The objective function poses a minimization problem based on the error of the network. The objective function of every search agent is evaluated, the search agent with the minimum value of the objective function is chosen as the best search agent, and its position is retained as the destination. The search agents referred to here correspond to the weights of ACNN. The objective is given as,

ρ = Σ | O^B(t) − O^Ground(t) |  (9)

where ρ denotes the objective function, and the output of ACNN and the ground truth are given as O^B(t) and O^Ground(t), respectively.
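Read as code, the fitness of one candidate weight set is simply the accumulated absolute error of equation (9); a minimal sketch (function names assumed):

```python
import numpy as np

def objective(o_critic, o_ground):
    """Equation (9): accumulated absolute error between the ACNN output
    O_B(t) and the ground-truth SR pixels O_Ground(t); lower is better."""
    return float(np.sum(np.abs(o_critic - o_ground)))
```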

Update the parameters: The main parameters in SCA are χ_1, χ_2, χ_3, and χ_4. The parameter χ_1 symbolizes the movement direction; in other words, it indicates the region of the next position, which may lie either between the source and the destination or outside them. The parameter χ_2 defines how far the movement should be towards or away from the destination. The third random parameter, χ_3, assigns a random weight to the destination, to stochastically emphasize or deemphasize the destination's impact on the distance. The last random parameter, χ_4, switches between the sine and the cosine components; the name SCA derives from this switching.

Update the Position using the proposed F-SCA: The proposed rule for updating the position of the search agent is a modification of SCA with the fractional concept. The position update follows two conditions, defined with respect to the fourth random parameter χ_4. The position of the search agent when the random number lies below 0.5 is given as,

$$ Z(t+1) = Z(t) + \chi_1 \times \sin(\chi_2) \times \left| \chi_3 M_t - Z(t) \right| \qquad (10) $$

where Z(t+1) denotes the position of the search agent in the (t+1)-th iteration, and χ_1, χ_2, χ_3, and χ_4 are the random numbers. The position of the destination search agent is denoted as M_t, and |·| stands for the absolute value. The random number χ_4 varies in the range 0 to 1. The standard equation of SCA is modified with the fractional theory, and the modification is derived below. The fractional terms use the history of the search agent's positions for the position update at any iteration, which enhances the optimization process. Moreover, the convergence of the algorithm is improved, and the globally optimal solution is derived. Equation (10) is rewritten as,

$$ Z(t+1) - Z(t) = \chi_1 \sin(\chi_2)\, \chi_3 M_t - \chi_1 \sin(\chi_2)\, Z(t) \qquad (11) $$

$$ D^{\beta}[Z(t+1)] = \chi_1 \sin(\chi_2)\, \chi_3 M_t - \chi_1 \sin(\chi_2)\, Z(t) \qquad (12) $$

where D^β[Z(t+1)] indicates the differential component, the difference between the positions of the search agent in the current and previous iterations. Expanding the differential with the fractional theory gives,

$$ Z(t+1) - \beta Z(t) - \frac{1}{2}\beta Z(t-1) - \frac{1}{6}(1-\beta) Z(t-2) - \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) = \chi_1 \sin(\chi_2)\, \chi_3 M_t - \chi_1 \sin(\chi_2)\, Z(t) \qquad (13) $$

where β is the fractional coefficient constant. Thus, the position of the search agent is given as,

$$ Z(t+1) = \beta Z(t) + \frac{1}{2}\beta Z(t-1) + \frac{1}{6}(1-\beta) Z(t-2) + \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) + \chi_1 \sin(\chi_2)\, \chi_3 M_t - \chi_1 \sin(\chi_2)\, Z(t) \qquad (14) $$

$$ Z(t+1) = \left[\beta - \chi_1 \sin(\chi_2)\right] Z(t) + \frac{1}{2}\beta Z(t-1) + \frac{1}{6}(1-\beta) Z(t-2) + \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) + \chi_1 \sin(\chi_2)\, \chi_3 M_t \qquad (15) $$

The position of the search agent is updated based on its position at time t, the fractional coefficient β, and the positions of the search agent in the previous iterations. The above equation is employed for updating the position of the search agent when the position of the destination is better than the position of the search agent, i.e., when M_t > Z(t). The cosine component of the standard SCA is given as,

$$ Z(t+1) = Z(t) + \chi_1 \times \cos(\chi_2) \times \left| \chi_3 M_t - Z(t) \right| \qquad (16) $$

Equation (16) is rewritten as,

$$ Z(t+1) - Z(t) = \chi_1 \cos(\chi_2)\, \chi_3 M_t - \chi_1 \cos(\chi_2)\, Z(t) \qquad (17) $$

$$ D^{\beta}[Z(t+1)] = \chi_1 \cos(\chi_2)\, \chi_3 M_t - \chi_1 \cos(\chi_2)\, Z(t) \qquad (18) $$

The integration of the fractional concept into the above equation is given as,

$$ Z(t+1) - \beta Z(t) - \frac{1}{2}\beta Z(t-1) - \frac{1}{6}(1-\beta) Z(t-2) - \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) = \chi_1 \cos(\chi_2)\, \chi_3 M_t - \chi_1 \cos(\chi_2)\, Z(t) \qquad (19) $$

$$ Z(t+1) = \beta Z(t) + \frac{1}{2}\beta Z(t-1) + \frac{1}{6}(1-\beta) Z(t-2) + \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) + \chi_1 \cos(\chi_2)\, \chi_3 M_t - \chi_1 \cos(\chi_2)\, Z(t) \qquad (20) $$

$$ Z(t+1) = \left[\beta - \chi_1 \cos(\chi_2)\right] Z(t) + \frac{1}{2}\beta Z(t-1) + \frac{1}{6}(1-\beta) Z(t-2) + \frac{1}{24}\beta(1-\beta)(2-\beta) Z(t-3) + \chi_1 \cos(\chi_2)\, \chi_3 M_t \qquad (21) $$

Figure 3. Pseudo code of the proposed F-SCA

The position update equations can be employed for search spaces with larger dimensions, and the cyclic transition between the sine and cosine functions allows convergence to the best solution and enables exploitation of the region between two solutions. The search space is even explored outside the space between the destinations at the time of exploration. Thus, a proper balance between the exploration and exploitation phases of the search is obtained, such that the globally optimal solution is derived. The random number χ_1 is based on the current iteration and the maximum number of iterations. The search space exploitation is defined in the interval [-1, 1].

The optimization process begins with the generation of random solutions and processes them in search of the best solution. The best solution found is saved as the destination point, such that the solutions of the successive iterations are updated based on the destination point. Moreover, the sine and cosine range is updated as the iteration number increases, and the process is repeated up to the maximum number of iterations. The generation of the globally optimal solution is due to the following reasons: the random solutions generated improve as the iteration count increases, and various regions of the search space are explored when the sine and cosine values exceed 1 and -1, which allows the adaptive transition. The SR image obtained using ACNN is denoted as f_i^2(g, h). Figure 3 shows the pseudo code of the proposed algorithm. At first, the weights of ACNN and the population of SCA are initialized. Then, the objective function is calculated for every search agent, and the SCA parameters are updated. After that, the positions of the search agents are updated using equation (15) or (21), depending on χ_4. This process is repeated until the maximum number of iterations is reached. Finally, the best solution is returned. A sketch of this loop follows.
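The sketch below follows the paper's update rules (equations (15) and (21), with the absolute value dropped as in the derivation), but the shrinking schedule for χ_1, the population size, and the `fitness` wrapper (which could wrap the `objective` sketch of equation (9) above) are assumptions:

```python
import numpy as np

def f_sca(fitness, dim, n_agents=20, max_iter=100, beta=0.5, chi1_max=2.0, seed=0):
    """Sketch of F-SCA: SCA with the fractional-memory terms of equations
    (15) and (21). `fitness` maps a weight vector to the objective of
    equation (9); the returned vector is the best ACNN weight set found."""
    rng = np.random.default_rng(seed)
    Z = rng.uniform(-1.0, 1.0, (n_agents, dim))      # random initial agents
    hist = [Z.copy(), Z.copy(), Z.copy()]            # Z(t-1), Z(t-2), Z(t-3)
    scores = np.array([fitness(z) for z in Z])
    best = Z[scores.argmin()].copy()                 # destination point M_t
    best_score = scores.min()
    for t in range(max_iter):
        chi1 = chi1_max * (1.0 - t / max_iter)       # shrinking radius (assumed schedule)
        for h in range(n_agents):
            chi2 = rng.uniform(0.0, 2.0 * np.pi)
            chi3 = 2.0 * rng.uniform()
            chi4 = rng.uniform()                     # sine/cosine switch
            trig = np.sin(chi2) if chi4 < 0.5 else np.cos(chi2)
            frac = (0.5 * beta * hist[0][h]
                    + (1.0 / 6.0) * (1.0 - beta) * hist[1][h]
                    + (1.0 / 24.0) * beta * (1.0 - beta) * (2.0 - beta) * hist[2][h])
            z_new = (beta - chi1 * trig) * Z[h] + frac + chi1 * trig * chi3 * best
            hist[2][h] = hist[1][h].copy()           # shift the position history
            hist[1][h] = hist[0][h].copy()
            hist[0][h] = Z[h].copy()
            Z[h] = z_new
            score = fitness(Z[h])
            if score < best_score:                   # keep the best-so-far destination
                best_score, best = score, Z[h].copy()
    return best
```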

4.3 Generation of the SR Image using the Hybrid SVR-ACNN Model
The SR images obtained using both the SVR model and ACNN are subjected to the weighted average to form the final SR image. The video output enhanced using the proposed hybrid SVR-ACNN model assures an SR video of good quality, with no degradation in the quality of the video. The SR image obtained using the proposed model is given as,

S_i(g, h) = β × f_i^1(g, h) + (1 − β) × f_i^2(g, h)  (22)

where β represents the weighting constant and S_i(g, h) is the SR image. The dimension of the SR image S_i(g, h) is (a × b) ∈ [(x × s) × (y × s)].

5. Results and Discussion

This section deliberates the results and discussion of the proposed video super-resolution method, along with a comparative analysis, in order to prove the effectiveness of the proposed method.

5.1 Experimental Setup
The software tool used for the implementation of the proposed technique is MATLAB, operating on the Windows 8 operating system.

5.2 Dataset Description
The database employed for the experimentation is taken from [29]. The Cambridge-driving Labeled Video Database (CamVid) [29] is the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes.

5.3 Experimental Analysis
The experimental analysis of the proposed method is highlighted in this section. Figure 4, figure 5, figure 6, and figure 7 show the sample results of the experiment. Three videos, video 1, video 2, and video 3, and three frames each from those videos are considered for the analysis. The input frames are depicted in figure 4 a), figure 4 b), and figure 4 c). The HR images obtained for the scaling factors 2, 3, and 4 are depicted in figure 5, figure 6, and figure 7, respectively. The HR images obtained using the proposed method for frame 1, frame 2, and frame 3 for the scaling factor 2 are given in figure 5 a), figure 5 b), and figure 5 c), respectively. The HR images obtained using the proposed method for the three frames with the scaling factor 3 are given in figure 6 a), figure 6 b), and figure 6 c), respectively. Similarly, the HR images obtained for the three frames with the scaling factor 4 are given in figure 7 a), figure 7 b), and figure 7 c), respectively.

5.4 Performance Metrics
The performance metrics employed for the analysis include the following (a computational sketch of the metrics follows the list):

i) Peak Signal-to-Noise Ratio (PSNR): PSNR is the measure that determines the quality of the image; a better method assures a higher value of PSNR. PSNR is expressed in decibels (dB).

ii) Second Derivative like Measure of Enhancement (SDME): The SDME metric identifies the visual quality of the enhanced image by calculating a second-order measure.

Figure 4. Sample results of the experiment a) LR image from Video 1 b) LR image from Video 2 c) LR image from Video 3


Figure 5. HR images using the scaling factor 2 a) for video 1 b) for video 2 c) for video 3

Figure 6. HR images using the scaling factor 3 a) for video 1 b) for video 2 c) for video 3

Figure 7. HR images using the scaling factor 4 a) for video 1 b) for video 2 c) for video 3

The SDME metric is expressed as follows,

$$ \mathrm{SDME} = -\frac{1}{v_1 \times v_2} \sum_{i=1}^{v_1} \sum_{j=1}^{v_2} 20 \ln \left| \frac{K_{i,j}^{max} - 2K_{i,j}^{cen} + K_{i,j}^{min}}{K_{i,j}^{max} + 2K_{i,j}^{cen} + K_{i,j}^{min}} \right| \qquad (23) $$

where K_{i,j}^{max}, K_{i,j}^{min}, and K_{i,j}^{cen} refer to the maximum, minimum, and center pixel values of a block, respectively, and v_1 and v_2 index the image blocks.

iii) Structural Similarity (SSIM) Index: SSIM is a measure for predicting the perceived quality of the SR image; an effective method assures a higher value of SSIM.
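The metrics can be sketched as follows. PSNR and SDME are written out directly from their definitions (the 8 × 8 block size in SDME is an assumption; the paper does not state it), while SSIM is available off the shelf, e.g. `skimage.metrics.structural_similarity`:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between the ground-truth and SR frames."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sdme(img, block=8, eps=1e-12):
    """Equation (23) over v1 x v2 non-overlapping blocks. The block size and
    the eps guard against log(0) are assumptions."""
    v1, v2 = img.shape[0] // block, img.shape[1] // block
    total = 0.0
    for i in range(v1):
        for j in range(v2):
            B = img[i * block:(i + 1) * block, j * block:(j + 1) * block].astype(float)
            k_max, k_min = B.max(), B.min()
            k_cen = B[block // 2, block // 2]   # centre pixel of the block
            ratio = (k_max - 2.0 * k_cen + k_min) / (k_max + 2.0 * k_cen + k_min + eps)
            total += 20.0 * np.log(abs(ratio) + eps)
    return -total / (v1 * v2)
```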

5.5 Performance Analysis
This subsection shows the performance analysis of the proposed method by varying the number of hidden neurons.

5.5.1 Performance Analysis based on PSNR


Figure 8 shows the performance analysis of the proposed method based on PSNR for video 1, video 2, and video 3. Figure 8 (a) shows the PSNR of the proposed method for video 1. For the scaling factor 4, the proposed hybrid SVR-ACNN with 5 hidden neurons has a PSNR of 31.69 dB. The proposed model has a PSNR of 31.86 dB, 31.85 dB, and 31.96 dB when the number of neurons is 10, 15, and 20, respectively, for the scaling factor 4. The PSNR of the proposed method using video 2 is provided in figure 8 (b). The proposed model has a PSNR of 31.44 dB, 31.65 dB, 31.44 dB, and 31.48 dB when the number of hidden neurons is 5, 10, 15, and 20, respectively, for the scaling factor 4. Similarly, the PSNR curve of the proposed model for video 3 is provided in figure 8 (c). For the scaling factor 3, the proposed model has a PSNR of 31.36 dB, 31.08 dB, 31.55 dB, and 31.54 dB when the number of hidden neurons is 5, 10, 15, and 20.

Figure 8. Performance analysis based on PSNR (a) video 1 (b) video 2 (c) video 3


5.5.2 Performance Analysis based on SDME
Figure 9 shows the performance analysis of the proposed method based on SDME for video 1, video 2, and video 3. Figure 9 (a) shows the SDME of the proposed method for video 1. For the scaling factor 2, the proposed hybrid SVR-ACNN has an SDME of 50.67 dB, 50.68 dB, 51.03 dB, and 50.55 dB when the number of hidden neurons is 5, 10, 15, and 20, respectively. Figure 9 (b) depicts the SDME curve of the proposed model for video 2. The proposed model has an SDME of 46.54 dB, 46.78 dB, 46.69 dB, and 46.77 dB when the number of hidden neurons is 5, 10, 15, and 20, respectively, for the scaling factor 3. The SDME curve of the proposed model for video 3 is provided in figure 9 (c). For the scaling factor 4, the proposed model has an SDME of 54.59 dB, 54.80 dB, 54.56 dB, and 54.92 dB when the number of hidden neurons is 5, 10, 15, and 20.




Figure 9. Performance analysis based on SDME (a) video 1 (b) video 2 (c) video 3





5.5.3 Performance Analysis based on SSIM
The performance analysis of the proposed method based on SSIM for video 1, video 2, and video 3 is shown in figure 10. Figure 10 (a) shows the SSIM curve of the proposed model for video 1. For the scaling factor 4, the proposed hybrid SVR-ACNN has an SSIM of 0.90 when the number of hidden neurons is 5, and an SSIM of 0.91 when the number of hidden neurons is 10, 15, and 20.

Figure 10 (b) depicts the SSIM curve of the proposed model for video 2. For the scaling factor 3, the proposed model has an SSIM of 0.92 when the number of hidden neurons is 5, 10, and 15. When the number of hidden neurons is 20, the proposed model has an SSIM of 0.91 for the scaling factor 3. The SSIM curve of the proposed model for video 3 is provided in figure 10 (c). For the scaling factor 4, the proposed model has an SSIM of 0.92 when the number of hidden neurons is 5, 10, 15, and 20.

5.6 Competing Methods
The methods employed for the analysis include ACNN [30], the Recurrent Residual Network (RNN) [3], the Convolutional Neural Network (CNN) [1], KNN, and patch-based denoising [9]. The performance of these methods is compared with that of the proposed method in order to prove the effectiveness of the proposed method.

5.7 Comparative Analysis
This section presents the comparative analysis of the video SR methods based on the three evaluation metrics.

PSNR: The analysis based on the PSNR of the methods is carried out using the three videos and is depicted in figure 11. Figure 11 (a) shows the analysis using video 1.


Figure 10. Performance analysis based on SSIM (a) video 1 (b) video 2 (c) video 3

For the scaling factor 4, the PSNR of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 33.644 dB, 21.79 dB, 19 dB, 21.53 dB, 32.21 dB, and 23.98 dB, respectively. Figure 11 (b) shows the analysis using video 2. For the scaling factor 4, the PSNR of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 31.3646 dB, 13.90 dB, 20.66 dB, 11.45 dB, 21.52 dB, and 24.09 dB, respectively. The proposed method exhibited the highest PSNR. Figure 11 (c) shows the analysis using video 3. For the scaling factor 4, the PSNR of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 30.974 dB, 12.619 dB, 21.07 dB, 10.3033 dB, 20.4912 dB, and 23.88 dB, respectively. For all the scaling factors considered, the proposed method exhibited the highest PSNR.

SDME: The analysis based on the SDME of the comparative methods is carried out using the three videos and is depicted in figure 12. Figure 12 (a) shows the SDME analysis using video 1. For the scaling factor 4, the SDME of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 45.277 dB, 36.651 dB, 38.0591 dB, 43.0591 dB, 44.7630 dB, and 37.8764 dB, respectively. The proposed method exhibited a greater SDME than the existing methods. Figure 12 (b) presents the SDME analysis using video 2. For the scaling factor 4, the SDME of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 43.7399 dB, 37.866 dB, 39.294 dB, 42.583 dB, 43.216 dB, and 41.239 dB, respectively. Figure 12 (c) shows the analysis based on SDME using video 3. For the scaling factor 4, the SDME of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 53.7979 dB, 29.75 dB, 41.13 dB, 41.8065 dB, 41.965 dB, and 43.866 dB, respectively.


Figure 11. Comparative analysis based on PSNR (a) video 1 (b) video 2 (c) video 3





Figure 12. Comparative analysis based on SDME (a) video 1 (b) video 2 (c) video 3


Figure 13. Comparative analysis based on SSIM (a) video 1 (b) video 2 (c) video 3




The proposed method exhibited the greater value of SDME for all the videos, irrespective of the scaling factor.

SSIM: The analysis of the comparative methods based on SSIM is carried out using the three videos, as depicted in figure 13. Figure 13 (a) shows the analysis using video 1. For the scaling factor 4, the SSIM of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 0.9398, 0.9045, 0.7404, 0.9297, 0.9113, and 0.5422, respectively. Figure 13 (b) shows the analysis using video 2. For the scaling factor 4, the SSIM of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 0.9190, 0.8157, 0.9439, 0.8007, 0.7688, and 0.6226, respectively. Figure 13 (c) shows the analysis using video 3. For the scaling factor 4, the SSIM of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 0.8956, 0.7297, 0.6498, 0.7036, 0.6754, and 0.6182, respectively. From figure 13, it is seen that the proposed method exhibited the greater value of SSIM in most cases.

5.8 Comparative Discussion
The comparative analysis of the SR methods at their maximum performance is given in table 1. The PSNR of the proposed hybrid SVR-ACNN, ACNN, RNN, CNN, KNN, and patch-based denoising is 33.6447 dB, 21.7935 dB, 19 dB, 21.5312 dB, 32.21 dB, and 23.98 dB, respectively. The SSIM of ACNN and RNN is 0.9045 and 0.7404, whereas that of the proposed hybrid SVR-ACNN is 0.9398. The maximum SDME attained by the proposed hybrid SVR-ACNN is 45.2779 dB, whereas the existing KNN could provide an SDME of 44.763 dB. Overall, the proposed method acquired the greatest values of PSNR, SSIM, and SDME.

Methods                     PSNR (dB)   SSIM     SDME (dB)
Proposed Hybrid SVR-ACNN    33.6447     0.9398   45.2779
ACNN                        21.7935     0.9045   36.6517
RNN                         19          0.7404   38.0591
CNN                         21.5312     0.9297   43.0591
KNN                         32.21       0.9113   44.763
Patch-based denoising       23.98       0.5422   37.876

Table 1. Comparative analysis using the video super-resolution methods

6. Conclusion

The super-resolution video generated using the proposed model is a high-quality video that is applicable to processes associated with demanding video applications. The hybrid model has been developed using the SVR model and the Actor Critic Neural Network, which are merged together based on the weighted average concept. The tuning of the ACNN is based on the newly devised Fractional-based SCA (F-SCA), which possesses the capacity to compute the weights based on past records and holds a faster-converging property toward the globally optimal solution. The proposed model of video enhancement addresses the demerits of the existing SR methods, offering an effective SR methodology. The experimentation is performed using the Cambridge-driving Labeled Video Database (CamVid), and the effectiveness of the proposed method is analyzed based on PSNR, SDME, and SSIM. The proposed method outperforms the existing SR methods with a maximum PSNR of 33.6447 dB, a maximum SDME of 45.2779 dB, and a maximum SSIM of 0.9398.

References

[1] Kappeler, Armin., Yoo, Seunghwan., Dai, Qiqin., Katsaggelos, Aggelos K. (2016). Video super-resolution with convolutional neural networks, IEEE Transactions on Computational Imaging, 2 (2) 109-122.

[2] Li, Kun., Zhu, Yanming., Yang, Jingyu., Jiang, Jianmin. (2016). Video super-resolution using an adaptive superpixel-guided auto-regressive model, Pattern Recognition, 51, p. 59-71.

[3] Yang, Wenhan., Feng, Jiashi., Xie, Guosen., Liu, Jiaying., Guo, Zongming., Yan, Shuicheng. (2017). Video super-resolution based on spatial-temporal recurrent residual networks, Computer Vision and Image Understanding.

[4] Li, Dingyi., Wang, Zengfu. (2017). Video Super-Resolution via Motion Compensation and Deep Residual Learning, IEEE Transactions on Computational Imaging, 3 (4) 749-762.

[5] Deshmukh, Amar B., Usha Rani, N. (2017). Fractional-Grey Wolf optimizer-based kernel weighted regression model for multi-view face video super resolution, International Journal of Machine Learning and Cybernetics, p. 1-19, 23 December 2017.

[6] Li, Yawei., Li, Xiaofeng., Fu, Zhizhong. (2017). Modified non-local means for super-resolution of hybrid videos, Computer Vision and Image Understanding, 2 December 2017.


[7] Huang, Yan., Wang, Wei., Wang, Liang. (2017). Video Super-Resolution via Bidirectional Recurrent Convolutional Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 99, p. 1-1.

[8] Chen, Di., He, Xiaohai., Chen, Honggang., Wang, Zhengyong., Zhang, Yijun. (2016). Video super-resolution using joint regularization, In: Proceedings of the IEEE 13th International Conference on Signal Processing (ICSP), p. 668-672.

[9] Buades, Antoni., Lisani, Jose-Luis., Miladinovic, Marko. (2016). Patch-based video denoising with optical flow estimation, IEEE Transactions on Image Processing, 25 (6) 2573-2586.

[10] Wang, Jen-Wen., Chiu, Ching-Te. (2017). Video Super-resolution using Edge-based Optical Flow and Intensity Prediction, Journal of Signal Processing Systems, p. 1-13.

[11] Zhang, Xinfeng., Xiong, Ruiqin., Ma, Siwei., Li, Ge., Gao, Wen. (2015). Video super-resolution with registration-reliability regulation and adaptive total variation, Journal of Visual Communication and Image Representation, 30, p. 181-190, July 2015.

[12] Rao, Yunbo., Chen, Leiting. (2012). A Survey of Video Enhancement Techniques, Journal of Information Hiding and Multimedia Signal Processing, 3 (1) 71-99, January 2012.

[13] Bhagya, H. K., Keshaveni, N. (2016). Review on video enhancement techniques, International Journal of Engineering Science Invention Research & Development, 3 (2) August 2016.

[14] Zhang, Z., Sze, V. (2016). FAST: Free adaptive super-resolution via transfer for compressed videos, arXiv preprint arXiv:1603.08968.

[15] Jin, Y., Bouganis, C. S. (2015). Robust multi-image based blind face hallucination, In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 5252-5260.

[16] Zhang, L., Zhang, H., Shen, H., Li, P. (2010). A super-resolution reconstruction algorithm for surveillance images, Signal Processing, 90 (3) 848-859.

[17] Zhong, Y., Zhang, L. (2012). Remote sensing image subpixel mapping based on adaptive differential evolution, IEEE Transactions on Systems, Man, and Cybernetics B, Cybern., 42 (5) 1306-1329.

[18] Wallach, D., Lamare, F., Kontaxakis, G., Visvikis, D. (2012). Super-resolution in respiratory synchronized positron emission tomography, IEEE Transactions on Medical Imaging, 31 (2) 438-448.

[19] Zhang, J., Cao, Y., Zha, Z. J., Zheng, Z., Chen, C. W., Wang, Z. (2016). A unified scheme for super-resolution and depth estimation from asymmetric stereoscopic video, IEEE Transactions on Circuits and Systems for Video Technology, 26 (3) 479-493.

[20] Yue, L., Shen, H., Li, J., Yuan, Q., Zhang, H., Zhang, L. (2016). Image super-resolution: The techniques, applications, and future, Signal Processing, 128, p. 389-408.

[21] Freeman, W. T., Jones, T. R., Pasztor, E. C. (2002). Example-based super-resolution, IEEE Computer Graphics and Applications, 22 (2) 56-65.

[22] Chang, H., Yeung, D.-Y., Xiong, Y. (2004). Super-resolution through neighbour embedding, In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1, p. 275-282.

[23] Yang, J., Wright, J., Huang, T., Ma, Y. (2008). Image super-resolution as sparse representation of raw image patches, In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, p. 1-8.

[24] Timofte, R., De Smet, V., Van Gool, L. (2013). Anchored neighbourhood regression for fast example-based super-resolution, In: Proceedings of the International Conference on Computer Vision, p. 1920-1927.

[25] Dong, C., Loy, C. C., He, K., Tang, X. (2014). Learning a deep convolutional network for image super-resolution, In: Proceedings of the European Conference on Computer Vision, p. 184-199.

[26] Liu, C., Sun, D. (2014). On Bayesian adaptive video super resolution, IEEE Transactions on Pattern Analysis and Machine Intelligence, p. 346-360.

[27] Williams, R. J., Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks, Neural Computation, 1 (2) 270-280.

[28] Mirjalili, Seyedali. (2016). SCA: A Sine Cosine Algorithm for solving optimization problems, Knowledge-Based Systems, 96, p. 120-133, March 2016.

[29] Motion-based Segmentation and Recognition Dataset, http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/, accessed December 2017.

[30] Zhao, Dongbin., Wang, Bin., Liu, Derong. (2013). A supervised Actor-Critic approach for adaptive cruise control, Soft Computing, 17 (11) 2089-2099, November 2013.

[31] Ni, Karl S., Nguyen, Truong Q. (2007). Image Superresolution Using Support Vector Regression, IEEE Transactions on Image Processing, 16 (6) 1596-1610.

[32] Bhaladhare, Pawan R., Jinwala, Devesh C. (2014). A Clustering Approach for the l-Diversity Model in Privacy Preserving Data Mining Using Fractional Calculus-Bacterial Foraging Optimization Algorithm, Advances in Computer Engineering, 2014, p. 1-12.

[33] Daga, B. S., Ghatol, A. A. (2016). Detection of Objects and Activities in Videos using Spatial Relations and Ontology Based Approach in Video Database System, International Journal of Advances in Engineering & Technology, 9 (6) 640-650.

[34] Diamantini, Claudia., Potena, Domenico. (2009). Bayes vector quantizer for class-imbalance problem, IEEE Transactions on Knowledge and Data Engineering.

[35] Schietroma, Mario., Piccione, Federica., Clementi, Marco., Cecilia, Emanuela Marina., Sista, Federico., Pessia, Beatrice., Carlei, Francesco., Guadagni, Stefano., Amicucci, Gianfranco. (2017). Short- and long-term, 11-22 years, results after laparoscopic Nissen fundoplication in obese versus nonobese patients, Journal of Obesity.

[36] Di Furia, Marino., Della Penna, Andrea., Salvatorelli, Andrea., Clementi, Marco., Guadagni, Stefano. (2017). A single thyroid nodule revealing early metastases from clear cell renal carcinoma: case report and review of literature, International Journal of Surgery Case Reports, 34, p. 96-99.

[37] Attaccalite, Claudio., Cannuccia, E., Grüning, M. (2017). Excitonic effects in third-harmonic generation: The case of carbon nanotubes and nanoribbons, Physical Review B, 95 (12).

[38] Palumbo, Paola., Miconi, Gianfranca., Cinque, Benedetta., Lombardi, Francesca., La Torre, Cristina., Dehcordi, Soheila Raysi., Galzio, Renato., Cimini, Annamaria., Giordano, Antonio., Cifone, Maria Grazia. (2017). NOS2 expression in glioma cell lines and glioma primary cell cultures: correlation with neurosphere generation and SOX-2 expression, Oncotarget, 8 (15).

[39] Valsalan, Prajoona., Manimegalai, Shibi O., Augustine, Shine P. (2017). Non-invasive estimation of blood pressure using a linear regression model from the photoplethysmogram (PPG) signal, Perspectivas em Ciencia da Informacao, 22 (4).