1CP_RT_J034 Load balancing model for Cloud Data Center
ABSTRACT:
Cloud data center management is a key
problem due to the numerous and heterogeneous strategies that can be applied,
ranging from VM placement to federation with other clouds. Performance
evaluation of cloud computing infrastructures is required to predict and
quantify the cost-benefit of a strategy portfolio and the corresponding Quality
of Service (QoS) experienced by users. Such analyses are not feasible by
simulation or on-the-field experimentation, due to the great number of
parameters that have to be investigated. In this paper, we present an
analytical model, based on Stochastic Reward Nets (SRNs), that is both scalable
enough to model systems composed of thousands of resources and flexible enough
to represent different policies and cloud-specific strategies. Several
performance metrics are defined and evaluated to analyze the behavior of a
cloud data center: utilization, availability, waiting time, and responsiveness.
A resiliency analysis is also provided to take into account load bursts.
Finally, a general approach is presented that, starting from the concept of
system capacity, can help system managers set the data center parameters
appropriately under different working conditions.
EXISTING SYSTEM:
In order to integrate business
requirements and application-level needs in terms of Quality of Service (QoS),
cloud service provisioning is regulated by Service Level Agreements (SLAs):
contracts between clients and providers that specify the price of a service,
the QoS levels required during service provisioning, and the penalties
associated with SLA violations. In such a context, performance evaluation
plays a key role, allowing system managers to evaluate the effects of different
resource management strategies on the operation of the data center and to
predict the corresponding costs and benefits.
Cloud systems differ from traditional
distributed systems. First of all, they are characterized by a very large
number of resources that can span different administrative domains. Moreover,
the high level of resource abstraction makes it possible to implement particular
resource management techniques, such as VM multiplexing or VM live migration,
that, even if transparent to end users, have to be considered in the design of
performance models in order to accurately understand the system behavior.
Finally, different clouds, belonging to the same or to different organizations,
can dynamically join each other to achieve a common goal, usually the
optimization of resource utilization. This mechanism, referred to as cloud
federation, makes it possible to provide and release resources on demand, thus
giving the whole infrastructure elastic capabilities.
DISADVANTAGES OF EXISTING SYSTEM:
· On-the-field experiments are mainly
focused on the offered QoS; since they rely on a black-box approach, it is
difficult to correlate the obtained data with the internal resource management
strategies implemented by the system provider.
· Simulation does not allow comprehensive
analyses of system performance to be carried out, due to the great number of
parameters that have to be investigated.
PROPOSED SYSTEM:
In this paper, we present a stochastic
model, based on Stochastic Reward Nets (SRNs), that exhibits the above-mentioned
features and captures the key concepts of an IaaS cloud system. The proposed
model is scalable enough to represent systems composed of thousands of
resources, and it makes it possible to represent both physical and virtual
resources while exploiting cloud-specific concepts such as infrastructure
elasticity. With respect to the existing literature, the innovative aspect of
the present work is that it provides a generic and comprehensive view of a cloud
system: low-level details, such as VM multiplexing, are easily integrated with
cloud-level actions such as federation, making it possible to investigate
different mixed strategies. An exhaustive set of performance metrics is defined,
covering both the system provider's perspective (e.g., utilization) and the end
users' perspective (e.g., responsiveness).
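For illustration only, the Java sketch below shows how two such metrics, utilization (provider side) and responsiveness (user side), could be estimated from simple counters collected while observing the system. It is not part of the SRN model itself, and the class and method names are assumptions made here for demonstration.

// Illustrative sketch (not part of the SRN model): estimating utilization
// and responsiveness from simple counters collected while observing the
// data center. Class and method names are assumptions.
public class MetricsCollector {

    private final int totalResources;       // N physical resources
    private double observedTime;            // total observation time
    private double busyResourceTime;        // integral of busy resources over time
    private long completedJobs;
    private long jobsWithinDeadline;

    public MetricsCollector(int totalResources) {
        this.totalResources = totalResources;
    }

    // Record an observation interval of length dt during which
    // 'busyResources' resources were in use.
    public void observe(double dt, int busyResources) {
        observedTime += dt;
        busyResourceTime += busyResources * dt;
    }

    // Record a completed job and whether it met the responsiveness deadline.
    public void recordCompletion(double responseTime, double deadline) {
        completedJobs++;
        if (responseTime <= deadline) {
            jobsWithinDeadline++;
        }
    }

    // Utilization: average fraction of busy resources (provider metric).
    public double utilization() {
        return observedTime == 0.0 ? 0.0
                : busyResourceTime / (observedTime * totalResources);
    }

    // Responsiveness: probability of being served within the deadline
    // (user metric).
    public double responsiveness() {
        return completedJobs == 0 ? 0.0
                : (double) jobsWithinDeadline / completedJobs;
    }
}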
ADVANTAGES OF PROPOSED SYSTEM:
To provide a fair comparison among
different resource management strategies, also taking into account the system
elasticity, a performance evaluation approach is described. Such an approach,
based on the concept of system capacity, offers a holistic view of a cloud
system and allows system managers to identify the best solution with respect to
an established goal and to set the system parameters appropriately.
MODULES:
1. System Queuing
2. Scheduling Module
3. VM Placement Module
4. Federation Module
5. Arrival Process
MODULES DESCRIPTION:
1. System Queuing:
Job requests (in terms of VM instantiation requests) are enqueued in the system
queue. Such a queue has a finite size Q; once its limit is reached, further
requests are rejected. The system queue is managed according to a FIFO
scheduling policy.
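The following Java sketch illustrates this behavior, a FIFO queue with finite capacity Q that rejects requests once full; the class and method names are our own illustrative choices, not part of the original model.

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the system queue described above: a FIFO queue with
// finite capacity Q; once the limit is reached, further VM instantiation
// requests are rejected. Class and method names are illustrative.
public class SystemQueue {

    private final int capacity;                 // Q
    private final Queue<String> queue = new ArrayDeque<String>();
    private long rejected;

    public SystemQueue(int capacity) {
        this.capacity = capacity;
    }

    // Returns true if the request was enqueued, false if it was rejected.
    public boolean enqueue(String vmRequest) {
        if (queue.size() >= capacity) {
            rejected++;                         // queue full: drop the request
            return false;
        }
        return queue.offer(vmRequest);
    }

    // FIFO extraction: the oldest pending request is served first.
    public String dequeue() {
        return queue.poll();
    }

    public long rejectedRequests() {
        return rejected;
    }
}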
2. Scheduling Module:
When a resource is available, a job is accepted and the corresponding VM is
instantiated. We assume that the instantiation time is negligible and that the
service time (i.e., the time needed to execute a job) is exponentially
distributed with mean 1/μ.
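As an illustration of this assumption, the Java sketch below samples exponentially distributed service times with mean 1/μ via inverse-transform sampling; the class name and the rate value used in the example are assumptions made here for demonstration.

import java.util.Random;

// Minimal sketch of the scheduling assumption: instantiation time is
// neglected and the service time is drawn from an exponential
// distribution with rate mu (mean 1/mu), via inverse-transform sampling.
public class ExponentialService {

    private final double mu;          // service rate (jobs per time unit)
    private final Random random = new Random();

    public ExponentialService(double mu) {
        this.mu = mu;
    }

    // Samples a service time: T = -ln(U) / mu, with U uniform in (0, 1).
    public double sampleServiceTime() {
        double u = random.nextDouble();
        while (u == 0.0) {            // avoid log(0)
            u = random.nextDouble();
        }
        return -Math.log(u) / mu;
    }

    public static void main(String[] args) {
        ExponentialService service = new ExponentialService(2.0); // mean 0.5
        double sum = 0.0;
        for (int i = 0; i < 100000; i++) {
            sum += service.sampleServiceTime();
        }
        System.out.println("Empirical mean service time: " + sum / 100000);
    }
}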
3. VM Placement:
Thanks to the VM multiplexing technique, the cloud system can provide a number
M of logical resources greater than the number N of physical resources. In this
case, multiple VMs can be allocated on the same physical machine (PM), e.g., a
core in a multi-core architecture. Multiple VMs sharing the same PM can incur a
performance degradation, mainly due to I/O interference between VMs.
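The Java sketch below illustrates one possible way to realize VM multiplexing: each VM is placed on the least loaded PM, and a simple interference penalty is applied per co-located VM. The 10% penalty, the placement heuristic, and all names are illustrative assumptions, not the paper's exact formulation.

// Minimal sketch of VM multiplexing: N physical machines (e.g., cores)
// host up to M logical resources (M >= N), so several VMs may share one
// PM. The degradation factor models the I/O interference penalty; its
// value and the class/method names are illustrative assumptions.
public class VmPlacement {

    private final int physicalMachines;    // N
    private final int maxVmsPerPm;         // multiplexing degree, M = N * maxVmsPerPm
    private final int[] vmsOnPm;           // VMs currently hosted by each PM

    public VmPlacement(int physicalMachines, int maxVmsPerPm) {
        this.physicalMachines = physicalMachines;
        this.maxVmsPerPm = maxVmsPerPm;
        this.vmsOnPm = new int[physicalMachines];
    }

    // Places a VM on the least loaded PM; returns the PM index,
    // or -1 if all M logical resources are already in use.
    public int placeVm() {
        int target = -1;
        for (int pm = 0; pm < physicalMachines; pm++) {
            if (vmsOnPm[pm] < maxVmsPerPm
                    && (target == -1 || vmsOnPm[pm] < vmsOnPm[target])) {
                target = pm;
            }
        }
        if (target != -1) {
            vmsOnPm[target]++;
        }
        return target;
    }

    // Service-time inflation for a VM hosted on the given PM: each extra VM
    // sharing the PM adds a fixed interference penalty (assumed 10%).
    public double degradationFactor(int pm) {
        return 1.0 + 0.10 * (vmsOnPm[pm] - 1);
    }
}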
4. Federation Module:
Cloud federation allows the system to use, in particular situations, the
resources offered by other public cloud systems through a sharing and paying
model. In this way, elastic capabilities can be exploited in order to respond
to particular load conditions. Job requests can be redirected to other clouds
by transferring the corresponding VM disk images through the network.
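The Java sketch below illustrates the federation step in a simplified way: when a job is redirected, the delay of transferring its VM disk image is approximated as image size divided by available bandwidth. The bandwidth and image-size figures, as well as all names, are illustrative assumptions.

// Minimal sketch of the federation step: when no local resource is
// available, the job is redirected to a federated public cloud and the
// corresponding VM disk image is transferred over the network.
public class FederationModule {

    private final double networkBandwidthMBps;   // assumed available bandwidth
    private long federatedJobs;

    public FederationModule(double networkBandwidthMBps) {
        this.networkBandwidthMBps = networkBandwidthMBps;
    }

    // Returns the extra delay (seconds) paid to move the VM disk image
    // to the federated cloud before the job can start remotely.
    public double redirect(double vmImageSizeMB) {
        federatedJobs++;
        return vmImageSizeMB / networkBandwidthMBps;
    }

    public long federatedJobs() {
        return federatedJobs;
    }

    public static void main(String[] args) {
        FederationModule federation = new FederationModule(50.0); // 50 MB/s
        System.out.println("Transfer delay for a 2048 MB image: "
                + federation.redirect(2048.0) + " s");
    }
}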
5. Arrival Process:
Finally, with respect to the arrival process, we investigate three different
scenarios. In the first one (constant arrival process), we assume the arrival
process to be a homogeneous Poisson process with rate λ. The second scenario
accounts for the fact that large-scale distributed systems with thousands of
users, such as cloud systems, can exhibit self-similarity/long-range dependence
in the arrival process. The last scenario (bursty arrival process) takes into
account the presence of a burst with fixed and short duration, and it is used
to investigate the system resiliency.
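The Java sketch below illustrates the constant and bursty scenarios: inter-arrival times are exponential with rate λ, and inside a burst window of fixed, short duration the rate is multiplied by a constant factor. All numeric values and names are illustrative assumptions, and the self-similar scenario is not sketched here.

import java.util.Random;

// Minimal sketch of two arrival scenarios: a homogeneous Poisson process
// with rate lambda (exponential inter-arrival times) and a bursty variant
// in which, inside a burst window of fixed, short duration, the rate is
// multiplied by a factor. All values and names are illustrative.
public class ArrivalProcess {

    private final double lambda;        // base arrival rate
    private final double burstStart;    // burst window [burstStart, burstEnd)
    private final double burstEnd;
    private final double burstFactor;   // rate multiplier during the burst
    private final Random random = new Random();

    public ArrivalProcess(double lambda, double burstStart,
                          double burstEnd, double burstFactor) {
        this.lambda = lambda;
        this.burstStart = burstStart;
        this.burstEnd = burstEnd;
        this.burstFactor = burstFactor;
    }

    // Next inter-arrival time given the current simulated time:
    // exponential with the rate in force at that instant.
    public double nextInterArrival(double now) {
        double rate = (now >= burstStart && now < burstEnd)
                ? lambda * burstFactor
                : lambda;
        double u = random.nextDouble();
        while (u == 0.0) {
            u = random.nextDouble();
        }
        return -Math.log(u) / rate;
    }

    public static void main(String[] args) {
        // Bursty scenario: base rate 1.0, rate multiplied by 5 in [100, 110).
        ArrivalProcess bursty = new ArrivalProcess(1.0, 100.0, 110.0, 5.0);
        double t = 0.0;
        int arrivalsInBurst = 0;
        while (t < 200.0) {
            t += bursty.nextInterArrival(t);
            if (t >= 100.0 && t < 110.0) {
                arrivalsInBurst++;
            }
        }
        System.out.println("Arrivals inside the burst window: " + arrivalsInBurst);
    }
}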
SYSTEM CONFIGURATION:-
HARDWARE CONFIGURATION:-
· Processor - Pentium IV
· Speed - 1.1 GHz
· RAM - 256 MB (min)
· Hard Disk - 20 GB
· Key Board - Standard Windows Keyboard
· Mouse - Two or Three Button Mouse
· Monitor - SVGA
SOFTWARE CONFIGURATION:-
· Operating System : Windows XP
· Programming Language : JAVA/J2EE
· Java Version : JDK 1.6 & above
CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email ID: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone: +91 97518 00789 / +91 72999 51536