International Journal of Innovations & Advancement in Computer Science (IJIACS)
ISSN 2347 – 8616, Volume 6, Issue 1, January 2017
A Survey on Temporal Task Scheduling for Profit
Maximization in Hybrid Clouds
M. Manikandan, M.E. Scholar (CSE), Kumaraguru College of Technology, Coimbatore, India
M. Suguna, Assistant Professor (GR-II), Kumaraguru College of Technology, Coimbatore, India
ABSTRACT
Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the costs of managing and maintaining hardware and software resources. Cloud computing has a service-oriented architecture in which services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), in which equipment such as hardware, storage, servers, and networking components is made accessible over the Internet; Platform-as-a-Service (PaaS), which includes hardware and software computing platforms such as virtualized servers, operating systems, and the like; and Software-as-a-Service (SaaS), which includes software applications and other hosted services.
An analytical model can provide an accurate estimate of the complete probability distribution of the request response time and other important performance indicators. Such a model allows cloud operators to determine the relationship between the number of servers and the input buffer size, on one side, and performance indicators such as the mean number of tasks in the system, the blocking probability, and the probability that a task obtains immediate service, on the other. A private cloud provider may therefore be unable to satisfy all arriving tasks with its limited resources when arrivals are massive. Existing works usually provide an admission control mechanism to refuse some of the arriving tasks that exceed the capacity of a private cloud. However, this decreases the throughput of the private cloud and inevitably causes revenue loss to its provider.
KEYWORDS: cloud computing, data centers, big data, task scheduling
1. INTRODUCTION
Cloud computing can efficiently provide
on-demand computing resources over the network
to consumers worldwide. Typically, computing
resources in cloud data centers are dynamically
delivered to consumers using a pay-as-you-go
pricing model. In addition, the economy of scale
brought by cloud computing attracts an increasing
number of companies to deploy their applications in
cloud data centers. As a typical part of the cloud, Infrastructure as a Service (IaaS) provides the foundation for applications. Typical IaaS providers such as Rackspace and Amazon EC2 provide services to consumers based on a pay-per-use model. An IaaS provider manages its own limited resources. From the perspective of an IaaS provider, therefore, the term private cloud in this paper denotes a resource-constrained IaaS provider that may outsource some of its tasks for execution in external public clouds when it cannot deliver the promised quality of service (QoS) with its own resources.
A private cloud provider aims to serve consumers' tasks in the most cost-effective way while guaranteeing the specified QoS. Profit maximization is therefore a critically important goal for a private cloud provider. The uncertainty and aperiodicity of arriving tasks make it difficult to predict future arrivals and pose a major challenge to operators of a private cloud. It is thus possible that a private cloud provider cannot satisfy all arriving tasks with its limited resources when arrivals are massive. Existing works usually provide an admission control mechanism to refuse some of the arriving tasks that exceed the capacity of a private cloud. However, this decreases the throughput of the private cloud and inevitably causes revenue loss to its provider. The mechanism of hybrid clouds, in contrast, enables a private cloud provider to make use of public clouds where
resources are delivered in the form of virtual
machines (VMs).
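The outsourcing decision described above can be sketched as a simple greedy rule. The capacities, per-unit public-cloud price, and revenues below are hypothetical illustrations, not a formulation taken from any of the surveyed papers.

```python
# Illustrative sketch (not from the surveyed papers): a private cloud with a
# hypothetical capacity decides, task by task, whether to serve a request
# locally or outsource it to a public cloud at a per-unit price.

def schedule_tasks(tasks, private_capacity, public_unit_price):
    """Greedily serve tasks locally while capacity remains; outsource the rest.

    tasks: list of (task_id, demand, revenue) tuples.
    Returns (local, outsourced, profit).
    """
    local, outsourced, profit = [], [], 0.0
    remaining = private_capacity
    for task_id, demand, revenue in tasks:
        if demand <= remaining:
            remaining -= demand
            local.append(task_id)
            profit += revenue            # local execution cost treated as sunk
        else:
            cost = demand * public_unit_price
            if revenue > cost:           # outsource only when it is profitable
                outsourced.append(task_id)
                profit += revenue - cost
            # otherwise the task is rejected (admission control)
    return local, outsourced, profit
```

This captures the trade-off in the text: rejecting a task loses its revenue, while outsourcing keeps the revenue minus the public-cloud rental cost.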
2. LITERATURE SURVEY
Linlin Wu and Saurabh Kumar Garg [3] proposed SLA-Based Resource Provisioning for Hosted Software-as-a-Service Applications in Cloud Computing Environments.
Cloud computing is a solution for addressing
challenges such as licensing, distribution,
configuration, and operation of enterprise
applications associated with the traditional IT
infrastructure, software sales and deployment
models. Migrating from a traditional model to the
Cloud model reduces the maintenance complexity
and cost for enterprise customers, and provides ongoing revenue for Software as a Service (SaaS)
providers. Clients and SaaS providers need to
establish a Service Level Agreement (SLA) to
define the Quality of Service (QoS). The main
objectives of SaaS providers are to minimize cost
and to improve Customer Satisfaction Level (CSL).
In this work, the authors propose customer-driven SLA-based resource provisioning algorithms that minimize cost by reducing resource and penalty costs, and improve CSL by minimizing SLA violations. The proposed provisioning algorithms consider customer profiles and providers' quality parameters (e.g., response time) to handle dynamic customer requests and infrastructure-level heterogeneity for enterprise systems. Customer-side parameters (such as the proportion of upgrade requests) and infrastructure-level parameters (such as the service initiation time) are also taken into account to compare algorithms. Simulation results show that the algorithms reduce the total cost by up to 54 percent and the number of SLA violations by up to 45 percent, compared with the previously proposed best algorithm.
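As a rough illustration of the cost objective described above, the total cost can be modeled as resource cost plus an SLA penalty. The linear penalty and all parameter values below are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical cost-model sketch: total cost is the VM resource cost plus an
# SLA penalty that grows linearly with the response-time violation.

def total_cost(vm_hours, vm_price, response_time, sla_response_time,
               penalty_rate):
    resource_cost = vm_hours * vm_price
    violation = max(0.0, response_time - sla_response_time)
    penalty_cost = penalty_rate * violation   # linear penalty above the SLA
    return resource_cost + penalty_cost
```

Minimizing the first term pushes toward fewer resources, while the second term pushes toward more; the provisioning algorithms balance the two.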
Dario Bruneo [4] introduced A Stochastic Model to Investigate Data Center Performance and QoS in IaaS Cloud Computing Systems. Cloud data center
management is a key problem due to the numerous
and heterogeneous strategies that can be applied,
ranging from the VM placement to the federation
with other clouds. Performance evaluation of Cloud
Computing infrastructures is required to predict and
quantify the cost-benefit of a strategy portfolio and
the corresponding Quality of Service (QoS)
experienced by users. Such analyses are not feasible
by simulation or on-the-field experimentation, due
to the great number of parameters that have to be
investigated. In this work, the authors present an analytical model, based on Stochastic Reward Nets (SRNs), that is scalable enough to model systems composed of thousands of resources and flexible enough to represent different policies and cloud-specific strategies.
Several performance metrics are defined and
evaluated to analyze the behavior of a Cloud data
center: utilization, availability, waiting time, and
responsiveness. A resiliency analysis is also
provided to take load bursts into account. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers appropriately set the data center parameters under different working conditions.
A. Shahina Banu and W. R. Helen [5] introduced Self-Adaptive Learning PSO-Based Deadline Constrained Task Scheduling for Hybrid IaaS Cloud. Public clouds provide Infrastructure as a Service (IaaS) to users who do not own sufficient compute resources. IaaS achieves economy of scale by multiplexing, and therefore faces the challenge of scheduling tasks to meet peak demand while preserving Quality of Service (QoS). Previous studies proposed proactive machine purchasing or cloud federation to resolve this problem. However, the former is not economic and the latter is, for now, hardly feasible in practice. In this work, the authors propose a resource allocation framework in which an IaaS provider can outsource its tasks to External Clouds (ECs) when its own resources are not sufficient to meet demand. This architecture does not require the formal inter-cloud agreement that is necessary for cloud federation. The key issue is how to allocate users' tasks to maximize the profit of the IaaS provider while guaranteeing QoS. This problem is formulated as an integer programming (IP) model and solved by a self-adaptive learning particle swarm optimization (SLPSO)-based scheduling approach. In SLPSO, four updating strategies are used to adaptively update the velocity of each particle to ensure its diversity and robustness. Experiments show that SLPSO can improve a cloud provider's profit by 0.25%–11.56% compared with standard PSO, and by 2.37%–16.71% for problems of nontrivial size compared with CPLEX, under reasonable computation time.
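SLPSO builds on the canonical particle swarm update, in which each particle's velocity is pulled toward its own personal best and the swarm's global best. The sketch below shows only the standard update with illustrative coefficient values; the four adaptive strategies of SLPSO are not reproduced here.

```python
import random

# Canonical PSO velocity/position update (the base form that SLPSO adapts).
# w is inertia, c1/c2 are cognitive/social coefficients; the values are
# illustrative defaults, not the paper's tuned settings.

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    new_vel, new_pos = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()
        nv = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_vel.append(nv)
        new_pos.append(x + nv)
    return new_pos, new_vel
```

When a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, as expected.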
Tejas Nitore, Vishal Rale and Ambi Talegaov [6] presented A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing. As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing has become more and more popular. From cloud service providers' perspective, profit is one of the most important considerations, and it is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-term renting scheme is usually adopted to configure a cloud platform, which cannot guarantee service quality and leads to serious resource waste. In this work, a double resource renting scheme is first designed, in which short-term renting and long-term renting are combined to address the existing issues. This double renting scheme can effectively guarantee the quality of service of all requests and greatly reduce resource waste. Secondly, a service system is modeled as an M/M/m+D queuing model, and the performance indicators that affect the profit of the double renting scheme are analyzed, e.g., the average charge and the ratio of requests that need temporary servers. Thirdly, a profit maximization problem is formulated for the double renting scheme, and the optimized configuration of a cloud platform is obtained by solving it. Finally, a series of calculations compares the profit of the proposed scheme with that of the single renting scheme. The results show that the scheme can not only guarantee the service quality of all requests, but also obtain more profit than the single renting scheme.
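One performance indicator of such a queuing model is the probability that an arriving request finds all servers busy. The sketch below computes the classical Erlang C formula for a plain M/M/m queue; the paper's M/M/m+D model additionally imposes a deadline D, which is not modeled here.

```python
from math import factorial

# Erlang C: probability that an arriving request must wait because all m
# servers of an M/M/m queue are busy. Inputs are illustrative.

def erlang_c(arrival_rate, service_rate, m):
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / m                              # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: utilization must be below 1")
    s = sum(a**k / factorial(k) for k in range(m))
    top = a**m / (factorial(m) * (1 - rho))
    return top / (s + top)
```

For a single server (m = 1) the formula reduces to the utilization itself, a quick sanity check.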
Jianying Luo, Lei Rao, and Xue Liu [7] proposed
Temporal Load Balancing with Service Delay
Guarantees for Data Center Energy Cost
Optimization. Cloud computing services are becoming an integral part of people's daily life. These
services are supported by infrastructure known as
Internet data center (IDC). As demand for cloud
computing services soars, energy consumed by
IDCs is skyrocketing. Both academia and industry
have paid great attention to energy management of
IDCs. This paper studies an important energy
management problem—how to minimize energy
cost for IDCs in deregulated electricity markets. The authors propose a novel two-stage design and the eco-IDC
(Energy Cost Optimization-IDC) algorithm to
exploit the temporal diversity of electricity price
and dynamically schedule workload to execute on
IDC servers through an input queue. Extensive
evaluation experiments are performed using real-
life electricity price and workload traces at an
enterprise production data center. The evaluation
results demonstrate that the proposed approach
significantly reduces energy cost for IDCs,
guarantees a service delay bound, and alleviates
workload drop if the service delay bound is
sufficiently large.
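The temporal-diversity idea can be sketched as deferring each batch of work to the cheapest hour within its delay bound. The greedy rule, prices, and delay bound below are illustrative assumptions, not the eco-IDC algorithm itself.

```python
# Illustrative sketch of temporal load shifting: work arriving in hour t may
# be delayed up to `max_delay` hours and is executed in the cheapest
# admissible hour. Prices and workload values are hypothetical.

def schedule_by_price(arrivals, prices, max_delay):
    """arrivals[t] = workload arriving in hour t; prices[t] = $/unit in hour t.
    Returns (executed, cost) where executed[t] is work run in hour t."""
    executed = [0.0] * len(prices)
    cost = 0.0
    for t, work in enumerate(arrivals):
        window = range(t, min(t + max_delay + 1, len(prices)))
        best = min(window, key=lambda h: prices[h])  # cheapest hour in bound
        executed[best] += work
        cost += work * prices[best]
    return executed, cost
```

The delay bound plays the role of the service delay guarantee: work is never deferred past its window, no matter how attractive a later price is.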
Haitao Yuan, Jing Bi [8] introduced CAWSAC:
Cost-Aware Workload Scheduling and Admission
Control for Distributed Cloud Data Centers.
Multiple heterogeneous applications concurrently
run in distributed cloud data centers (CDCs) for
better performance and lower cost. There is a highly
challenging problem of how to minimize the total
cost of a CDCs provider in a market where the
bandwidth and energy cost show geographical
diversity. To solve the problem, this paper first
proposes a revenue-based workload admission
control method to judiciously admit requests by
considering factors including priority, revenue and
the expected response time. Then, this paper
presents a cost-aware workload scheduling method
to jointly optimize the number of active servers in
each CDC, and the selection of Internet service
providers for the CDCs provider. Finally, trace-driven
proposed methods can greatly reduce the total cost
and increase the throughput of the CDCs provider in
comparison to existing methods.
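Revenue-based admission can be sketched as ranking requests by revenue per unit of demanded capacity and admitting them while capacity lasts. This greedy rule and its inputs are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of revenue-based admission control: requests are ranked
# by revenue density (revenue per unit of demanded capacity) and admitted
# greedily until the capacity budget is exhausted.

def admit(requests, capacity):
    """requests: list of (req_id, demand, revenue). Returns admitted ids."""
    ranked = sorted(requests, key=lambda r: r[2] / r[1], reverse=True)
    admitted, used = [], 0.0
    for req_id, demand, revenue in ranked:
        if used + demand <= capacity:
            used += demand
            admitted.append(req_id)
    return admitted
```

A real controller would also weigh priority and expected response time, as the paper describes; the density ranking stands in for that richer scoring here.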
Wenhong Tian [9] developed A Toolkit for Modeling and Simulation of Real-time Virtual Machine Allocation in a Cloud Data Center. Resource scheduling in Infrastructure as a Service (IaaS) is
one of the keys for large-scale Cloud applications.
Extensive research on all issues in real environment
is extremely difficult because it requires developers
to consider network infrastructure and the
environment, which may be beyond the control. In
addition, the network conditions cannot be
predicted or controlled. Therefore, performance
evaluation of workload models and Cloud
provisioning algorithms in a repeatable manner
under different configurations and requirements is
difficult. There is still a lack of tools that enable
developers to compare different resource scheduling
algorithms in IaaS regarding both computing
servers and user workloads. To fill this gap in tools
for evaluation and modeling of Cloud environments
and applications, the authors propose CloudSched. CloudSched can help developers identify and explore appropriate solutions considering different resource scheduling algorithms. Unlike traditional scheduling algorithms that consider only one factor such as CPU, which can cause hotspots or bottlenecks in many cases, CloudSched treats multi-dimensional resources such as CPU, memory and network bandwidth in an integrated way, for both physical machines and virtual machines, under different scheduling objectives (algorithms). In this work,
two existing simulation systems at application level
for Cloud computing are studied, a novel
lightweight simulation system is proposed for real-
time virtual machine scheduling in Cloud data
centers, and results by applying the proposed
simulation system are analyzed and discussed.
Weijia Song and Zhen Xiao [10] proposed Adaptive Resource Provisioning for the Cloud Using Online Bin Packing. Data center applications present significant opportunities for multiplexing server resources. Virtualization technology makes it easy to move running applications across physical machines. In this work, the authors present an approach that uses virtualization technology to allocate data center resources dynamically based on application demands, and to support green computing by optimizing the number of servers actively used. The problem is abstracted as a variant of the relaxed on-line bin packing problem, and a practical, efficient algorithm is developed that works well in a real system, adjusting the resources available to each VM both within and across physical servers. Extensive simulation and experiment results demonstrate that the system achieves good performance compared to existing work.
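The bin-packing view can be sketched with the classical first-fit heuristic: each physical machine is a bin, and each VM goes into the first machine with enough residual capacity. The unit capacity and one-dimensional demands are simplifying assumptions; the paper addresses a relaxed on-line variant with dynamic VM resizing.

```python
# Classical first-fit bin packing as a sketch of VM placement: each physical
# machine is a bin of (hypothetical) unit capacity, and placing VMs first-fit
# keeps the number of active servers small.

def first_fit(vm_demands, bin_capacity=1.0):
    """Place each VM in the first server with room; open a new one if none."""
    servers = []                      # residual capacity per active server
    placement = []                    # server index chosen for each VM
    for demand in vm_demands:
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement.append(i)
                break
        else:
            servers.append(bin_capacity - demand)
            placement.append(len(servers) - 1)
    return placement, len(servers)
```

Minimizing the number of opened bins corresponds directly to the green-computing goal of minimizing active servers.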
Tan Lu and Minghua Chen [11] worked on Simple and Effective Dynamic Provisioning for Power-Proportional Data Centers. Energy consumption represents a significant cost in data center operation. A large fraction of the energy, however, is used to power idle servers when the workload is low. Dynamic provisioning techniques aim at saving this portion of the energy by turning off unnecessary servers. In this work, the authors explore how much gain knowing future workload information can bring to dynamic provisioning. In particular, they develop online dynamic provisioning solutions with and without future workload information available. They first reveal an elegant structure of the off-line dynamic provisioning problem, which allows them to characterize the optimal solution in a "divide-and-conquer" manner. They then exploit this insight to design two online algorithms with competitive ratios 2 − α and e/(e − 1 + α), respectively, where 0 ≤ α ≤ 1 is the normalized size of a look-ahead window in which future workload information is available. A fundamental observation is that future workload information beyond the full-size look-ahead window (corresponding to α = 1) will not improve dynamic provisioning performance. The algorithms are decentralized and easy to implement, and their effectiveness is demonstrated in simulations using real-world traces.
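The value of look-ahead can be sketched with a ski-rental-style rule: keep an idle server on only if, within the visible window, it will be needed again soon enough that idling is cheaper than a power cycle. The cost parameters below are hypothetical, and this sketch is not the paper's actual algorithm.

```python
# Look-ahead sketch (hypothetical costs): keeping an idle server powered costs
# idle_power per slot, while a full power cycle costs beta. With future demand
# visible inside the window, keep the server on only if it is needed again
# soon enough that idling until then is cheaper than cycling.

def keep_server_on(future_demand, current_servers, idle_power, beta):
    """future_demand: servers needed in each upcoming slot of the window."""
    for gap, need in enumerate(future_demand, start=1):
        if need >= current_servers:            # server needed again at `gap`
            return gap * idle_power <= beta    # idle iff cheaper than cycling
    return False                               # not needed within the window
```

This also illustrates the paper's observation that look-ahead beyond a certain window stops helping: once the cheapest-idle horizon beta / idle_power is covered, extra future information cannot change the decision.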
Mohamed Faten Zhani [12] presented Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud. Data centers consume tremendous amounts of energy in terms of power distribution and
cooling. Dynamic capacity provisioning is a
promising approach for reducing energy
consumption by dynamically adjusting the number
of active machines to match resource demands.
However, despite extensive studies of the problem,
existing solutions have not fully considered the
heterogeneity of both workload and machine
hardware found in production environments. In
particular, production data centers often comprise
heterogeneous machines with different capacities
and energy consumption characteristics.
Meanwhile, the production cloud workloads
typically consist of diverse applications with
different priorities, performance and resource
requirements. Failure to consider the heterogeneity
of both machines and workloads will lead to both
sub-optimal energy-savings and long scheduling
delays, due to incompatibility between workload
requirements and the resources offered by the
provisioned machines. To address this limitation, the authors present Harmony, a Heterogeneity-Aware dynamic capacity provisioning scheme for cloud data centers. Specifically, the K-means clustering algorithm is first used to divide the workload into distinct task classes with similar characteristics in terms of resource and performance requirements. A technique is then presented that dynamically adjusts the number of machines to minimize total energy
consumption and scheduling delay. Simulations
using traces from a Google compute cluster demonstrate that Harmony can reduce energy by 28
percent compared to heterogeneity-oblivious
solutions.
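Harmony's first step, K-means task classification, can be sketched in a few lines. The two-dimensional (CPU, memory) demands, initial centroids, and fixed iteration count below are illustrative assumptions; a production system would use a library implementation.

```python
# Minimal K-means sketch of the workload-classification step: tasks described
# by hypothetical (CPU, memory) demands are grouped into k classes.

def kmeans(points, centers, iters=20):
    """points: list of (cpu, mem); centers: initial centroids.
    Returns (centroids, labels)."""
    k = len(centers)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, (x, y) in enumerate(points):
            labels[i] = min(range(k), key=lambda c:
                            (x - centers[c][0])**2 + (y - centers[c][1])**2)
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, labels
```

Each resulting class groups tasks with similar resource shapes, which is what lets the provisioner match machine types to workload classes.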
3. CONCLUSION
This survey reviewed the design, implementation, and evaluation of resource management systems for cloud computing services that multiplex virtual resources onto physical resources adaptively, based on changing demand. Several of the surveyed systems use virtualization technology to allocate data center resources dynamically according to application demands and support green computing by optimizing the number of servers in use. A skewness metric can combine VMs with different resource characteristics so that server capacities are well utilized, and the resulting algorithms achieve both overload avoidance and green computing for systems with multi-resource constraints. New strategies that can be included in tools such as CloudAnalyst promise cost-effective results, and the surveyed results indicate that such strategies succeed in practice.
From the work surveyed, the simulation process can be improved by modifying or adding strategies for traffic routing, load balancing, and related concerns, so that researchers and developers can more easily predict the behavior of real cloud deployments. Sets of heuristics can prevent overload in the system effectively while saving energy, and trace-driven simulations and experiments demonstrate that such algorithms achieve good performance. The cloud model is expected to make manual provisioning practices of this kind unnecessary by offering automatic scale-up and scale-down in response to load variation, which also saves electricity, a significant portion of the operational expenses in large data centers.
4. REFERENCES
[1] R. Zou, V. Kalivarapu, E. Winer, J. Oliver, and S. Bhattacharya, "Particle swarm optimization-based source seeking," IEEE Trans. Autom. Sci. Eng., vol. 12, no. 3, pp. 865–875, Jul. 2015.
[2] J. Bi, H. Yuan, M. Tie, and W. Tan, "SLA-based optimization of virtualized resource for multi-tier web applications in cloud data centres," Enterprise Inform. Syst., vol. 9, no. 7, pp. 743–767, Nov. 2015.
[3] L. Wu and S. K. Garg, "SLA-based resource provisioning for hosted software-as-a-service applications in cloud computing environments," IEEE Trans. Services Comput., vol. 7, no. 3, pp. 465–485, Jul. 2014.
[4] D. Bruneo, "A stochastic model to investigate data center performance and QoS in IaaS cloud computing systems," in Proc. 32nd IEEE Int. Conf. Comput. Commun., 2013, pp. 2148–2156.
[5] A. Shahina Banu and W. R. Helen, "Self-adaptive learning PSO-based deadline constrained task scheduling for hybrid IaaS cloud," IEEE Trans. Autom. Sci. Eng., vol. 12, no. 1, pp. 309–323, Jan. 2014.
[6] T. Nitore, V. Rale, and A. Talegaov, "A profit maximization scheme with guaranteed quality of service in cloud computing," Int. J. Comput. Appl. (0975 – 8887), National Conference on Advancements in Computer & Information Technology (NCACIT-2016).
[7] J. Luo, L. Rao, and X. Liu, "Temporal load balancing with service delay guarantees for data center energy cost optimization," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 3, pp. 775–784, Mar. 2014.
[8] H. Yuan and J. Bi, "CAWSAC: Cost-aware workload scheduling and admission control for distributed cloud data centers," vol. 2, no. 1, Jan.–Mar. 2014.
[9] W. Tian, "A toolkit for modeling and simulation of real-time virtual machine allocation in a cloud data center," Future Gener. Comput. Syst., vol. 25, no. 6, pp. 599–616, 2009.
[10] W. Song and Z. Xiao, "Adaptive resource provisioning for the cloud using online bin packing," in Proc. High Performance Distributed Computing, ACM, 2011, pp. 229–238.
[11] T. Lu and M. Chen, "Simple and effective dynamic provisioning for power-proportional data centers," IEEE Trans. Parallel Distrib. Syst., vol. 3, no. 3, pp. 775–784, Feb. 2014.
[12] M. F. Zhani, "Dynamic heterogeneity-aware resource provisioning in the cloud," IEEE Trans. Cloud Computing, vol. 2, no. 1, Jan.–Mar. 2014.
5. ABOUT THE AUTHORS
Mr. M. Manikandan received his diploma in Computer Science and Engineering from Paavai Institute and his B.E. degree in Computer Science and Engineering from Bannari Amman Institute of Technology, Erode, India. He is currently pursuing the M.E. degree in Computer Science and Engineering at Kumaraguru College of Technology, Coimbatore, India. His areas of interest are Cloud Computing, Big Data and Web Technology.
M. Suguna is an Assistant Professor (GR-II) in the Department of Computer Science and Engineering, Kumaraguru College of Technology, Coimbatore, India. She received her M.E. degree in Computer Science and Engineering from Government College of Technology in 2005. She has published several papers in national and international journals and conferences. Her current research interests include Cloud Computing and Software Project Management. She is a life member of the Indian Society for Technical Education.