Tuesday 18 August 2015

1CP_RT_J034 Load balancing model for Cloud Data Center


ABSTRACT:
Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to the federation with other clouds. Performance evaluation of Cloud Computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding Quality of Service (QoS) experienced by users. Such analyses are not feasible by simulation or on-the-field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on Stochastic Reward Nets (SRNs), that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a Cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers to opportunely set the data center parameters under different working conditions.

 EXISTING SYSTEM:

In order to integrate business requirements and application-level needs in terms of Quality of Service (QoS), cloud service provisioning is regulated by Service Level Agreements (SLAs): contracts between clients and providers that express the price for a service, the QoS levels required during service provisioning, and the penalties associated with SLA violations. In such a context, performance evaluation plays a key role, allowing system managers to evaluate the effects of different resource management strategies on data center operation and to predict the corresponding costs and benefits.
Cloud systems differ from traditional distributed systems. First of all, they are characterized by a very large number of resources that can span different administrative domains. Moreover, the high level of resource abstraction allows the implementation of particular resource management techniques, such as VM multiplexing or VM live migration, that, even if transparent to final users, have to be considered in the design of performance models in order to accurately understand the system behavior. Finally, different clouds, belonging to the same or to different organizations, can dynamically join each other to achieve a common goal, usually the optimization of resource utilization. This mechanism, referred to as cloud federation, allows resources to be provided and released on demand, thus giving the whole infrastructure elastic capabilities.



DISADVANTAGES OF EXISTING SYSTEM:

·        On-the-field experiments are mainly focused on the offered QoS; they are based on a black-box approach that makes it difficult to correlate the obtained data with the internal resource management strategies implemented by the system provider.
·        Simulation does not allow comprehensive analyses of system performance to be conducted, due to the great number of parameters that have to be investigated.

PROPOSED SYSTEM:
In this paper, we present a stochastic model, based on Stochastic Reward Nets (SRNs), that exhibits the above-mentioned features, allowing it to capture the key concepts of an IaaS cloud system. The proposed model is scalable enough to represent systems composed of thousands of resources, and it makes it possible to represent both physical and virtual resources, exploiting cloud-specific concepts such as infrastructure elasticity. With respect to the existing literature, the innovative aspect of the present work is that it provides a generic and comprehensive view of a cloud system. Low-level details, such as VM multiplexing, are easily integrated with cloud-level actions such as federation, allowing different mixed strategies to be investigated. An exhaustive set of performance metrics is defined, covering both the system provider (e.g., utilization) and the final users (e.g., responsiveness).



ADVANTAGES OF PROPOSED SYSTEM:

To provide a fair comparison among different resource management strategies, while also taking into account the system elasticity, a performance evaluation approach is described. Such an approach, based on the concept of system capacity, presents a holistic view of a cloud system and allows system managers to identify the best solution with respect to an established goal and to set the system parameters appropriately.


MODULES:
1.     System Queuing
2.     Scheduling Module
3.     VM Placement Module
4.     Federation Module
5.     Arrival Process

MODULES DESCRIPTION:
1.     System Queuing:
Job requests (in terms of VM instantiation requests) are enqueued in the system queue. Such a queue has a finite size Q; once its limit is reached, further requests are rejected. The system queue is managed according to a FIFO scheduling policy.
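A minimal sketch of this queuing behaviour is given below, assuming a simple in-memory implementation; the class and method names (JobQueue, offer, poll) are illustrative and are not taken from the paper.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: a bounded FIFO job queue with rejection.
public class JobQueue<T> {
    private final int capacityQ;                 // finite queue size Q
    private final Deque<T> queue = new ArrayDeque<>();

    public JobQueue(int capacityQ) {
        this.capacityQ = capacityQ;
    }

    // Returns false (request rejected) once the queue limit Q is reached.
    public boolean offer(T job) {
        if (queue.size() >= capacityQ) {
            return false;                        // rejection: queue is full
        }
        queue.addLast(job);                      // enqueue at the tail
        return true;
    }

    // FIFO scheduling: the oldest waiting request is served first.
    public T poll() {
        return queue.pollFirst();
    }
}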
2.     Scheduling Module:
When a resource is available, a job is accepted and the corresponding VM is instantiated. We assume that the instantiation time is negligible and that the service time (i.e., the time needed to execute a job) is exponentially distributed with mean 1/μ.
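The exponential service-time assumption can be illustrated with a small sampling routine based on the inverse-transform method; the class and variable names below are assumptions made for illustration.

import java.util.Random;

// Illustrative sketch: drawing exponentially distributed service times with mean 1/mu.
public class ServiceTimeSampler {
    private final Random rng = new Random();
    private final double mu;                     // service rate (jobs per unit of time)

    public ServiceTimeSampler(double mu) {
        this.mu = mu;
    }

    // Inverse transform: T = -ln(1 - U) / mu with U ~ Uniform(0,1), so E[T] = 1/mu.
    public double nextServiceTime() {
        return -Math.log(1.0 - rng.nextDouble()) / mu;
    }
}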

3.     VM Placement:
According to the VM multiplexing technique, the cloud system can provide a number M of logical resources greater than the number N of physical ones. In this case, multiple VMs can be allocated on the same physical machine (PM), e.g., a core in a multi-core architecture. Multiple VMs sharing the same PM can incur a performance degradation, mainly due to I/O interference between the VMs.
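One simple way to picture this interference effect is to scale the per-VM service rate by a degradation factor that grows with the number of co-located VMs; the linear form used below is an assumption chosen for illustration, not the model adopted in the paper.

// Illustrative sketch: effective per-VM service rate when k VMs share one PM.
public class Multiplexing {
    // mu: nominal service rate of a VM running alone on the PM
    // k : number of VMs co-located on the same PM
    // d : per-neighbour degradation factor (e.g., 0.05 = 5% slowdown per extra VM)
    public static double effectiveRate(double mu, int k, double d) {
        return mu / (1.0 + d * (k - 1));
    }
}

For example, with mu = 1, d = 0.05 and k = 4 co-located VMs, the effective rate drops to about 0.87.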

4.     Federation Module:
Cloud federation allows the system to use, in particular situations, the resources offered by other public cloud systems through a sharing and paying model. In this way, elastic capabilities can be exploited in order to respond to particular load conditions. Job requests can be redirected to other clouds by transferring the corresponding VM disk images through the network.
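The federation decision can be pictured as a simple admission policy: serve locally if a resource is free, enqueue if the queue has room, otherwise redirect to a federated cloud (or reject). The enum and method names below are assumptions made for illustration only.

// Illustrative sketch of the federation decision for an incoming job request.
public class FederationPolicy {
    public enum Decision { ACCEPT_LOCAL, ENQUEUE, REDIRECT_TO_FEDERATION, REJECT }

    public Decision decide(int busyVms, int totalVms, int queueLength, int queueCapacity,
                           boolean federationAvailable) {
        if (busyVms < totalVms)          return Decision.ACCEPT_LOCAL;           // a local resource is free
        if (queueLength < queueCapacity) return Decision.ENQUEUE;                // wait in the system queue
        if (federationAvailable)         return Decision.REDIRECT_TO_FEDERATION; // VM disk image sent over the network
        return Decision.REJECT;
    }
}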
5.     Arrival Process:
Finally, with respect to the arrival process, we investigate three different scenarios. In the first one (constant arrival process) we assume the arrival process to be a homogeneous Poisson process with rate λ. The second scenario considers a self-similar arrival process, since large-scale distributed systems with thousands of users, such as cloud systems, can exhibit self-similarity/long-range dependence with respect to the arrival process. The last scenario (bursty arrival process) takes into account the presence of a burst with a fixed and short duration, and it will be used to investigate the system resiliency.
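For the constant and bursty scenarios, arrivals can be sketched as follows; the burst is modelled here simply by scaling the rate λ during a fixed window, which is an illustrative assumption rather than the exact process used in the paper.

import java.util.Random;

// Illustrative sketch: inter-arrival times for the constant (Poisson) and bursty scenarios.
public class ArrivalGenerator {
    private final Random rng = new Random();

    // Homogeneous Poisson process with rate lambda: exponential inter-arrival times.
    public double nextInterArrival(double lambda) {
        return -Math.log(1.0 - rng.nextDouble()) / lambda;
    }

    // Bursty scenario: during [burstStart, burstStart + burstDuration) the arrival
    // rate is scaled by burstFactor; otherwise the base rate lambda is used.
    public double rateAt(double t, double lambda, double burstStart,
                         double burstDuration, double burstFactor) {
        boolean inBurst = (t >= burstStart) && (t < burstStart + burstDuration);
        return inBurst ? lambda * burstFactor : lambda;
    }
}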

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-


ü Processor            -        Pentium IV
ü Speed                -        1.1 GHz
ü RAM                  -        256 MB (min)
ü Hard Disk            -        20 GB
ü Key Board            -        Standard Windows Keyboard
ü Mouse                -        Two or Three Button Mouse
ü Monitor              -        SVGA


SOFTWARE CONFIGURATION:-


ü Operating System                    : Windows XP
ü Programming Language           : JAVA/J2EE
ü Java Version                           : JDK 1.6 & above.


CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536



1CP_RT_J033 IPTV Portal System


ABSTRACT:

Virtualized cloud-based services can take advantage of statistical multiplexing across applications to yield significant cost savings. However, achieving similar savings with real-time services can be a challenge. In this paper, we seek to lower a provider’s costs for real-time IPTV services through a virtualized IPTV architecture and through intelligent time-shifting of selected services. Using Live TV and Video-on-Demand (VoD) as examples, we show that we can take advantage of the different deadlines associated with each service to effectively multiplex these services. We provide a generalized framework for computing the amount of resources needed to support multiple services, without missing the deadline for any service. We construct the problem as an optimization formulation that uses a generic cost function. We consider multiple forms of the cost function (e.g., maximum, convex, and concave functions) reflecting the cost of providing the service. The solution to this formulation gives the number of servers needed at different time instants to support these services. We implement a simple mechanism for time-shifting scheduled jobs in a simulator and study the reduction in server load using real traces from an operational IPTV network. Our results show that we are able to reduce the server load, approaching the reduction predicted by the optimization framework.

EXISTING SYSTEM:
Servers in the VHO serve VoD using unicast, while Live TV is typically multicast from servers using IP Multicast. When users change channels while watching live TV, we need to provide additional functionality so that the channel change takes effect quickly. For each channel change, the user has to join the multicast group associated with the channel and wait for enough data to be buffered before the video is displayed; this can take some time. As a result, there have been many attempts to support instant channel change by mitigating the user-perceived channel-switching latency.
DISADVANTAGES OF EXISTING SYSTEM:
·        Longer waiting time
·        Higher channel-switching latency
·        Not cost-effective
PROPOSED SYSTEM:
We propose a) To use a cloud computing infrastructure with virtualization to handle the combined workload of multiple services flexibly and dynamically, b) To either advance or delay one service when we anticipate a change in the workload of another service, and c) To provide a general optimization framework for computing the amount of resources to support multiple services without missing the deadline for any service.

ADVANTAGES OF PROPOSED SYSTEM:
In this paper, we consider two potential strategies for serving VoD requests. The first is a postponement-based strategy, in which we assume that each VoD chunk has a deadline a fixed number of seconds after the request for that chunk; the user can therefore start playing the content up to that many seconds after the request. The second is an advancement-based strategy, in which we assume that requests for all chunks of the VoD content are made when the user requests the content. Since all chunks are requested at the start, each chunk has a different deadline: the first chunk has a deadline of zero, the second chunk a deadline of one, and so on. With this request pattern, the server could potentially deliver a huge amount of content to the user in the same time instant, violating the downlink bandwidth constraint.

MODULES DESCRIPTION:

Optimization Framework


An IPTV service provider is typically involved in delivering multiple real-time services, such as Live TV, VoD and, in some cases, a network-based DVR service. Each unit of data in a service has a deadline for delivery. For instance, each chunk of a VoD video file needs to be serviced by its playback deadline so that the playout buffer at the client does not underrun. In this section, we analyze the amount of resources required when multiple real-time services with deadlines are deployed in a cloud infrastructure. There have been multiple efforts in the past to analytically estimate the resource requirements for serving arriving requests that have a delay constraint. These have been studied especially in the context of voice, including the delivery of VoIP packets, and have generally assumed that the arrival process is Poisson.
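A schematic version of such a formulation can be written as follows; the symbols below (r_j, d_j, n_j, κ and the decision variables) are chosen here for illustration and are not taken from the paper:

\min_{s,\,x}\ \sum_{t} C(s_t)
\quad \text{subject to} \quad
\sum_{t=r_j}^{r_j+d_j} x_{j,t} = n_j \ \ \forall j,
\qquad
\sum_{j} x_{j,t} \le \kappa\, s_t \ \ \forall t,
\qquad
x_{j,t} \ge 0,

where request batch j arrives at slot r_j with n_j chunks and a deadline d_j slots later, x_{j,t} is the number of its chunks served in slot t, s_t is the number of servers active in slot t (each serving up to κ chunks per slot), and C(·) is the generic cost function. The solution {s_t} gives the number of servers needed at each time instant.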

Impact of Cost Functions on Server Requirements
We investigate linear, convex, and concave functions. With convex functions, the cost increases slowly initially and subsequently grows faster. For concave functions, the cost increases quickly initially and then flattens out, indicating a point of diminishing unit costs (e.g., slab or tiered pricing). Minimizing a convex cost function results in averaging the number of servers (i.e., the tendency is to service requests equally throughout their deadlines so as to smooth out the requirements of the number of servers needed to serve all the requests). Minimizing a concave cost function results in finding the extremal points away from the maximum (as shown in the example below) to reduce cost. This may result in the system holding back the requests until just prior to their deadline and serving them in a burst, to get the benefit of a lower unit cost because of the concave cost function (e.g., slab pricing). The concave optimization problem is thus optimally solved by finding boundary points in the server-capacity region of the solution space.
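As a toy illustration of these two behaviours (the numbers are chosen here and do not come from the paper): suppose four chunks arrive in slot 1 with a deadline of slot 2, and each server serves one chunk per slot. With the convex cost C(s) = s^2, spreading the work over the two slots as (2, 2) servers costs 4 + 4 = 8, whereas batching it as (0, 4) costs 16, so the smoothed schedule is optimal. With the concave cost C(s) = sqrt(s), batching as (0, 4) costs 2, whereas (2, 2) costs about 2.83, so holding the requests back and serving them in a burst just before the deadline is cheaper.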

Linear Cost Function

The linear cost represents the total number of servers used. The minimum total number of servers needed equals the total number of incoming requests. The optimal strategy is not unique: any strategy that serves all the requests while meeting their deadlines, using a total number of servers equal to the number of service requests, is optimal. One such strategy is simply to serve all requests as they arrive. The optimal cost associated with this cost function does not depend on the deadline assigned to each service class.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-


Processor             -        Pentium IV
Speed                 -        1.1 GHz
RAM                   -        256 MB (min)
Hard Disk             -        20 GB
Key Board             -        Standard Windows Keyboard
Mouse                 -        Two or Three Button Mouse
Monitor               -        SVGA

SOFTWARE CONFIGURATION:-


Operating System                    : Windows XP
Programming Language                : JAVA/J2EE
Java Version                        : JDK 1.6 & above
Database                            : MYSQL


CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
Website: 1croreprojects.com

Phone : +91 97518 00789 / +91 72999 51536

1CP_RT_J032 Intrusion Detection and Prevention in Web Servers


ABSTRACT:
Cloud security is one of the most important issues that have attracted a lot of research and development effort in the past few years. In particular, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multi-step exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, followed by DDoS attacks launched through the compromised zombies. Within the cloud system, especially Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack-graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches in order to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.


AIM
            The main aim of this project is to prevent vulnerable virtual machines from being compromised in the cloud server by using a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE.

SYNOPSIS
          Recent studies have shown that users migrating to the cloud consider security as the most important factor. A recent Cloud Security Alliance (CSA) survey shows that, among all security issues, abuse and nefarious use of cloud computing is considered the top security threat, in which attackers can exploit vulnerabilities in clouds and utilize cloud system resources to deploy attacks. In traditional data centers, where system administrators have full control over the host machines, vulnerabilities can be detected and patched by the system administrator in a centralized manner. However, patching known security holes in cloud data centers, where cloud users usually have the privilege to control the software installed on their managed VMs, may not work effectively and can violate the Service Level Agreement (SLA). Furthermore, cloud users can install vulnerable software on their VMs, which essentially contributes to loopholes in cloud security. The challenge is to establish an effective vulnerability/attack detection and response system for accurately identifying attacks and minimizing the impact of a security breach on cloud users.
In a cloud system where the infrastructure is shared by potentially millions of users, abuse and nefarious use of the shared infrastructure makes it easier for attackers to exploit vulnerabilities of the cloud and use its resources to deploy attacks in more efficient ways. Such attacks are more effective in the cloud environment, since cloud users usually share computing resources, e.g., being connected through the same switch and sharing the same data storage and file systems, even with potential attackers.

EXISTING SYSTEM:

Cloud users can install vulnerable software on their VMs, which essentially contributes to loopholes in cloud security. The challenge is to establish an effective vulnerability/attack detection and response system for accurately identifying attacks and minimizing the impact of a security breach on cloud users. In a cloud system where the infrastructure is shared by potentially millions of users, abuse and nefarious use of the shared infrastructure makes it easier for attackers to exploit vulnerabilities of the cloud and use its resources to deploy attacks in more efficient ways. Such attacks are more effective in the cloud environment, since cloud users usually share computing resources, e.g., being connected through the same switch and sharing the same data storage and file systems, even with potential attackers. The similar setup of VMs in the cloud (e.g., virtualization techniques, VM OS, installed vulnerable software, networking) also attracts attackers to compromise multiple VMs.


DISADVANTAGES OF EXISTING SYSTEM:

1.     There is no detection and prevention framework that works in a virtual networking environment.
2.     Attack detection is not accurate.


PROPOSED SYSTEM:
In this article, we propose NICE (Network Intrusion detection and Countermeasure selection in virtual network systems) to establish a defense-in-depth intrusion detection framework. For better attack detection, NICE incorporates attack graph analytical procedures into the intrusion detection processes. We must note that the design of NICE does not intend to improve any of the existing intrusion detection algorithms; indeed, NICE employs a reconfigurable virtual networking approach to detect and counter the attempts to compromise VMs, thus preventing zombie VMs.

ADVANTAGES OF PROPOSED SYSTEM:
The contributions of NICE are presented as follows:

Ø We devise NICE, a new multi-phase distributed network intrusion detection and prevention framework in a virtual networking environment that captures and inspects suspicious cloud traffic without interrupting users’ applications and cloud services.
Ø NICE incorporates a software switching solution to quarantine and inspect suspicious VMs for further investigation and protection. Through programmable network approaches, NICE can improve the attack detection probability and the resiliency to VM exploitation attacks without interrupting existing normal cloud services.
Ø NICE employs a novel attack graph approach for attack detection and prevention by correlating attack behavior, and also suggests effective countermeasures.
Ø NICE optimizes its implementation on cloud servers to minimize resource consumption. Our study shows that NICE incurs less computational overhead compared to proxy-based network intrusion detection solutions.

ALGORITHM USED:

Alert Correlation Algorithm
          Countermeasure Selection Algorithm

MODULES:

] Nice-A
] VM Profiling
] Attack Analyzer
] Network Controller


MODULES DESCRIPTION:

Nice-A
          The NICE-A is a Network-based Intrusion Detection System (NIDS) agent installed on each cloud server. It scans the traffic going through the bridges that control all the traffic among VMs and in/out of the physical cloud servers. It sniffs a mirroring port on each virtual bridge in the Open vSwitch. Each bridge forms an isolated subnet in the virtual network and connects to all related VMs. The traffic generated from the VMs on the mirrored software bridge is mirrored to a specific port on a specific bridge using SPAN, RSPAN, or ERSPAN methods. It is more efficient to scan the traffic at the cloud server, since all traffic in the cloud server needs to go through it; however, our design is independent of the installed VMs. The false alarm rate can be reduced through our architecture design.

VM Profiling

          Virtual machines in the cloud can be profiled to obtain precise information about their state, the services running, open ports, etc. One major factor that counts towards a VM profile is its connectivity with other VMs. Also required is knowledge of the services running on a VM, so as to verify the authenticity of alerts pertaining to that VM. An attacker can use a port-scanning program to perform an intense examination of the network and look for open ports on any VM. Therefore, information about any open ports on a VM, together with the history of opened ports, plays a significant role in determining how vulnerable the VM is. All these factors combined form the VM profile. VM profiles are maintained in a database and contain comprehensive information about vulnerabilities, alerts, and traffic.
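A minimal sketch of what such a profile record could contain is given below; the class and field names are assumptions made here for illustration and are not taken from the NICE implementation.

import java.util.List;
import java.util.Map;

// Illustrative sketch of a VM profile record as described above.
public class VmProfile {
    String vmId;
    List<Integer> openPorts;                  // ports currently open on the VM
    List<Integer> portHistory;                // ports observed open in the past
    List<String> runningServices;             // services used to validate alerts for this VM
    List<String> connectedVms;                // connectivity with other VMs
    Map<String, String> knownVulnerabilities; // e.g., vulnerability id -> affected service
    List<String> recentAlerts;                // alerts raised for this VM
    List<String> trafficSummary;              // recent traffic observations
}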
Attack Analyzer

          The major functions of the NICE system are performed by the attack analyzer, which includes procedures such as attack graph construction and update, alert correlation, and countermeasure selection. The process of constructing and utilizing the Scenario Attack Graph (SAG) consists of three phases: information gathering, attack graph construction, and potential exploit path analysis. With this information, attack paths can be modeled using the SAG. The attack analyzer also handles alert correlation and analysis operations. This component has two major functions: (1) it constructs the Alert Correlation Graph (ACG), and (2) it provides threat information and appropriate countermeasures to the network controller for virtual network reconfiguration. The NICE attack graph is constructed based on the following information: cloud system information, virtual network topology and configuration information, and vulnerability information.
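The countermeasure selection step can be pictured with the small sketch below, in which candidate countermeasures are ranked by a simple benefit/cost ratio; the class, its fields, and this scoring rule are simplifying assumptions made here for illustration and are not the exact algorithm used by NICE.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch: choosing a countermeasure for an alerted VM.
public class CountermeasureSelector {
    public static class Countermeasure {
        final String name;            // e.g., traffic isolation, deep packet inspection, quarantine
        final double riskReduction;   // expected drop in the VM's exploit probability
        final double intrusiveness;   // cost: impact on the running cloud service
        Countermeasure(String name, double riskReduction, double intrusiveness) {
            this.name = name;
            this.riskReduction = riskReduction;
            this.intrusiveness = intrusiveness;
        }
    }

    // Pick the candidate with the best benefit/cost ratio.
    public Optional<Countermeasure> select(List<Countermeasure> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble((Countermeasure c) -> c.riskReduction / c.intrusiveness));
    }
}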

Network Controller

          The network controller is a key component that supports the programmable networking capability needed to realize virtual network reconfiguration. In NICE, we integrated the control functions for both OVS and OFS into the network controller, which allows the cloud system to set security/filtering rules in an integrated and comprehensive manner. The network controller is responsible for collecting network information about the current OpenFlow network and provides input to the attack analyzer to construct attack graphs. In NICE, the network controller also consults with the attack analyzer for flow access control by setting up the filtering rules on the corresponding OVS and OFS. The network controller is also responsible for applying the countermeasures selected by the attack analyzer. Based on the VM Security Index and the severity of an alert, countermeasures are selected by NICE and executed by the network controller.


SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-


ü Processor            -        Pentium IV
ü Speed                -        1.1 GHz
ü RAM                  -        256 MB (min)
ü Hard Disk            -        20 GB
ü Key Board            -        Standard Windows Keyboard
ü Mouse                -        Two or Three Button Mouse
ü Monitor              -        SVGA

SOFTWARE CONFIGURATION:-


Operating System                    : Windows XP
Programming Language                : JAVA/J2EE
Java Version                        : JDK 1.6 & above

CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536


1CP_RT_J031 Information Leak Detection and Prevention System


ABSTRACT:

In today’s regulated business environment, the loss of data records carries heavy penalties, which can be expressed in terms of lost time, money, customer relations and, ultimately, lost profits. It may be obvious that a data security product should increase data protection and not decrease it; therefore it is important to detect information leakage, and even more important to prevent it. A comprehensive information leak protection system is vital for anyone who needs to keep data away from unauthorized persons.

EXISTING SYSTEM:
  Traditionally, leakage detection is handled by watermarking, e.g., a unique code is embedded in each distributed copy.
  If that copy is later discovered in the hands of an unauthorized party, the leaker can be identified.

 DISADVANTAGES OF EXISTING SYSTEMS:
  Watermarks can be very useful in some cases, but they involve some modification of the original data, and in some applications the data cannot be altered (e.g., a hospital may give patient records to researchers who will devise new treatments). Furthermore, watermarks can sometimes be destroyed if the data recipient is malicious.
 There is no proper existing system to detect information leakage in a secure manner.

PROPOSED SYSTEM:

ü Our goal is to detect when the distributor’s sensitive data has been leaked by agents, and if possible to identify the agent that leaked the data.

ü We develop a model for assessing the “guilt” of agents.

ü We also present algorithms for distributing objects to agents, in a way that improves our chances of identifying a leaker.

ü Finally, we also consider the option of adding “fake” objects to the distributed set. Such objects do not correspond to real entities but appear realistic to the agents.
          
 MODULES:

1. Login / Registration
2. Data Distributor
3. Data Allocation Module
4. Fake Object Module
5. Data Leakage Protection Module
6. Finding Guilty Agents Module

MODULES DESCRIPTION:

1. Login / Registration:
               This module is mainly designed to grant a user/agent the authority to access the other modules of the project. A user/agent obtains this access authority only after registration.
2. Data Distributor:

The data distributor component is developed in this module. A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data is leaked and found in an unauthorized place (e.g., on the web or on somebody’s laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means.

3. Data Allocation Module:
                            
The main focus of our project is the data allocation problem: how can the distributor “intelligently” give data to agents in order to improve the chances of detecting a guilty agent?
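A simple greedy sketch of this idea is shown below: when an agent requests a number of objects, the distributor hands out the objects currently held by the fewest other agents, so that the sets given to different agents overlap as little as possible. This is an illustrative simplification, not the exact allocation algorithm of the project.

import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative greedy sketch of low-overlap data allocation.
public class DataAllocator {
    private final Map<String, Integer> timesGiven = new HashMap<>(); // object id -> number of agents holding it

    public List<String> allocate(List<String> availableObjects, int count) {
        List<String> chosen = availableObjects.stream()
                .sorted(Comparator.comparingInt((String o) -> timesGiven.getOrDefault(o, 0)))
                .limit(count)
                .collect(Collectors.toList());
        chosen.forEach(o -> timesGiven.merge(o, 1, Integer::sum)); // record the new holders
        return chosen;
    }
}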


4. Fake Object Module:
                            
Fake objects are objects generated by the distributor in order to increase the chances of detecting agents that leak data. The distributor may be able to add fake objects to the distributed data in order to improve his effectiveness in detecting guilty agents. Our use of fake objects is inspired by the use of “trace” records in mailing lists.

5. Data Leakage protection Module:

In this module, to protect against data leakage, a secret key is sent to the agent who requests the files. The secret key is sent to the registered agent’s email address. Without the secret key, the agent cannot access the file sent by the distributor.
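A minimal sketch of issuing such a secret key is shown below; the key length, encoding, and class name are assumptions made for illustration, and actually emailing the key (e.g., via a mail library) is omitted.

import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch: generating and checking the per-request secret key.
public class SecretKeyIssuer {
    private final SecureRandom random = new SecureRandom();

    // Generate a 128-bit random key, encoded so it can be sent in an email.
    public String issueKey() {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // The agent must present the issued key before the file is released.
    public boolean verify(String presentedKey, String issuedKey) {
        return presentedKey != null && presentedKey.equals(issuedKey);
    }
}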

6. Finding Guilty Agents Module:

In this module, the distributor’s data allocation to agents has one constraint and one objective. The distributor’s constraint is to satisfy agents’ requests by providing them with the number of objects they request, or with all available objects that satisfy their conditions. His objective is to be able to detect an agent who leaks any portion of his data. This module is designed using the agent-guilt model. Here, a count value (based on the fake objects) is incremented whenever an agent transfers data, and the fake objects are stored in the database.
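A commonly used form of such an agent-guilt model can be sketched as follows (the notation is an assumption made here for illustration and is not necessarily the exact formula used in this project): the probability that agent U_i is guilty, given the leaked set S, is estimated as

\Pr\{G_i \mid S\} \;=\; 1 \;-\; \prod_{t \,\in\, S \cap R_i} \Bigl(1 - \frac{1-p}{|V_t|}\Bigr),

where R_i is the set of objects given to agent U_i, V_t is the set of agents that received object t, and p is the probability that a leaked object was obtained independently (e.g., guessed) rather than leaked by one of its holders. The product runs over the leaked objects that agent U_i actually received, so an agent who holds many of the leaked objects, especially rarely shared ones, receives a high guilt probability.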


HARDWARE REQUIRED
System                 :         Pentium IV 2.4 GHz
Hard Disk              :         40 GB
Floppy Drive           :         1.44 MB
Monitor                :         15-inch VGA colour
Mouse                  :         Logitech
Keyboard               :         110 keys enhanced
RAM                    :         256 MB

 SOFTWARE REQUIRED:
Operating System                    : Windows XP
Programming Language                : J2EE
Java Version                        : JDK 1.6 & above
Database                            : MYSQL

CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536