
1. Explain Basic Cloud Security with service model diagram

Three basic cloud security enforcements are expected. First, facility security in data centers demands
on-site security year round. Biometric readers, CCTV (closed-circuit TV), motion detection, and man
traps are often deployed. Also, network security demands fault-tolerant external firewalls, intrusion
detection systems (IDSes), and third-party vulnerability assessment. Finally, platform security demands
SSL and data encryption, strict password policies, and system trust certification. Figure 4.31 shows the
mapping of cloud models, where special security measures are deployed at various cloud operating
levels. Servers in the cloud can be physical machines or VMs. User interfaces are used to request
services. The provisioning tool carves out the systems from the cloud to satisfy the requested service.
A security-aware cloud architecture demands security enforcement. Malware-based attacks such as
network worms, viruses, and DDoS attacks exploit system vulnerabilities. These attacks compromise
system functionality or provide intruders unauthorized access to critical information. The expected
protections include:

• Protection of servers from malicious software attacks such as worms, viruses, and malware

• Protection of hypervisors or VM monitors from software-based attacks and vulnerabilities

• Protection of VMs and monitors from service disruption and DoS attacks

• Protection of data and information from theft, corruption, and natural disasters

• Providing authenticated and authorized access to critical data and services

2. Explain Security Challenges in VMs


1. Traditional Network Attacks: Include buffer overflows, DoS attacks, spyware, malware,
rootkits, Trojan horses, and worms.

2. New Cloud-Specific Attacks:

1. Hypervisor malware: Targets virtualization layers.

2. Guest hopping and hijacking: Exploits vulnerabilities to access other VMs.

3. VM rootkits: Gain control over VMs.

4. Man-in-the-middle attacks during VM migrations.

3. Attack Types:
1. Passive attacks: Steal sensitive data or passwords.

2. Active attacks: Manipulate kernel structures, potentially causing severe damage.

• Defense Mechanisms

• Intrusion Detection Systems (IDS):

• NIDS (Network-based IDS): Monitors network traffic.

• HIDS (Host-based IDS): Protects individual systems.

• Program Shepherding: Verifies and controls code execution.

• Security Tools:

• RIO Dynamic Optimization Infrastructure: Improves execution security.

• VMware vSafe and vShield: Tools for virtualization security.

• Intel vPro Technology: Enhances hardware-level protection.

• Other Approaches:

• Hardened operating systems.

• Isolated execution and sandboxing

3. Explain About Privacy and Copyright Protection


The user gets a predictable configuration before actual system integration. Yahoo!’s Pipes is a good
example of a lightweight cloud platform. With shared files and data sets, privacy, security, and
copyright data could be compromised in a cloud computing environment. Users desire to work in a
software environment that provides many useful tools to build cloud applications over large data sets.
Google’s platform essentially applies in-house software to protect resources.
Amazon EC2 applies HMAC and X.509 certificates in securing resources (a minimal HMAC request-signing
sketch follows the list below). It is necessary to protect browser-initiated application software in the
cloud environment. Here are several security features desired in a secure cloud:

• Dynamic web services with full support from secure web technologies

• Established trust between users and providers through SLAs and reputation systems

• Effective user identity management and data-access management

• Single sign-on and single sign-off to reduce security enforcement overhead

• Auditing and copyright compliance through proactive enforcement

• Shifting of control of data operations from the client environment to cloud providers

• Protection of sensitive and regulated information in a shared environment
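
As a concrete illustration of the HMAC-based request authentication mentioned above, the sketch below signs a canonical request string with a shared secret key and verifies it on the other side. It is a minimal sketch only: the request layout, key, and parameter names are hypothetical and this is not Amazon's actual signature scheme.

    import base64
    import hashlib
    import hmac

    def sign_request(secret_key: bytes, string_to_sign: str) -> str:
        """Compute an HMAC-SHA256 signature over a canonical request string."""
        digest = hmac.new(secret_key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
        return base64.b64encode(digest).decode("ascii")

    # Hypothetical canonical request: method, path, and sorted query parameters.
    canonical = "GET\n/object/report.pdf\nexpires=1700000000&user=alice"
    signature = sign_request(b"my-shared-secret", canonical)
    print("Signature:", signature)

    # The server recomputes the HMAC with its own copy of the secret and compares
    # the two values in constant time to authenticate the caller.
    assert hmac.compare_digest(signature, sign_request(b"my-shared-secret", canonical))

The X.509 certificate plays the complementary role of binding the caller's identity to a public key, while the HMAC above proves possession of the shared secret on each request.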


4. Distributed Defense against DDoS Flooding Attacks and its protection techniques

DDoS stands for Distributed Denial of Service. It refers to a type of cyberattack where multiple
compromised systems (often part of a botnet) are used to flood a target server, service, or network
with excessive traffic, causing it to become slow, unavailable, or even crash.

This distributed defense system detects unusual traffic patterns by monitoring traffic changes at routers,
which helps identify an attack before it overwhelms the victim. The system is effective in cloud core
networks and does not require edge-network intervention, relying instead on cooperation among
network providers; a simplified sketch of the change-detection idea follows.
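
The router-level monitoring described above can be approximated with a simple change-detection loop over per-interval packet counts. The sketch below flags an interval whose traffic deviates sharply from a running baseline; the window size, threshold factor, and sample data are assumptions for illustration, not the algorithm of any particular defense system.

    from collections import deque

    def detect_traffic_surge(samples, window=5, factor=3.0):
        """Flag intervals whose packet count exceeds `factor` times the recent average."""
        history = deque(maxlen=window)
        alerts = []
        for t, pkts in enumerate(samples):
            if len(history) == history.maxlen:
                baseline = sum(history) / len(history)
                if baseline > 0 and pkts > factor * baseline:
                    alerts.append((t, pkts, baseline))  # possible flooding attack
            history.append(pkts)
        return alerts

    # Hypothetical packets-per-second samples observed at a core router.
    traffic = [1000, 1100, 950, 1050, 1000, 1020, 9800, 15000, 14000]
    for t, pkts, base in detect_traffic_surge(traffic):
        print(f"interval {t}: {pkts} pps vs baseline {base:.0f} pps -> raise alarm upstream")

In a distributed deployment, routers that raise such alarms would share them with upstream providers so the flood can be throttled near its sources rather than at the victim.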

Data and Software Protection Techniques

Users desire a software environment that provides many useful tools to build cloud applications over
large data sets. In addition to application software for MapReduce, BigTable, EC2, S3, Hadoop, AWS,
GAE, and WebSphere, users need some security and privacy protection software for using the cloud.
Such software should offer the following features:

• Special APIs for authenticating users and sending e-mail using commercial accounts

• Fine-grained access control to protect data integrity and deter intruders or hackers

• Shared data sets protected from malicious alteration, deletion, or copyright violation

• Ability to prevent the ISP or cloud service provider from invading users’ privacy

• Personal firewalls at user ends to keep shared data sets from Java, JavaScript, and ActiveX applets

• A privacy policy consistent with the cloud service provider’s policy, to protect against identity theft,
spyware, and web bugs

• VPN channels between resource sites to secure transmission of critical data objects

5. Data Coloring and Cloud Watermarking


With shared files and data sets, privacy, security, and copyright information could be compromised in
a cloud computing environment. Users desire to work in a trusted software environment that provides
useful tools to build cloud applications over protected data sets. In the past, watermarking was mainly
used for digital copyright management. As shown in Figure 4.35, the system generates special colors
for each data object. Data coloring means labeling each data object by a unique color. Differently
colored data objects are thus distinguishable. The user identification is also colored to be matched
with the data colors. This color matching process can be applied to implement different trust
management events. Cloud storage provides a process for the generation, embedding, and extraction
of the watermarks in colored objects. Interested readers may refer to the articles by Hwang and Li [36]
for details on the data coloring and matching process. In general, data protection was done by
encryption and decryption, which are computationally expensive. Data coloring takes a minimal
number of calculations to color or decolor the data objects. Cryptography and watermarking or
coloring can be used jointly in a cloud environment.
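
The color-matching idea can be illustrated with a simplified sketch: each data object carries a short "color" tag derived from the object and its owner's credential, and a request is honored only when the requester's recomputed color matches the stored one. The cited work derives colors with fuzzy-logic-based watermarks; the keyed-hash tag below is only a lightweight stand-in to show the matching step.

    import hashlib
    import hmac

    def make_color(owner_secret: bytes, object_id: str) -> str:
        """Derive a short color tag binding a data object to its owner."""
        return hmac.new(owner_secret, object_id.encode(), hashlib.sha256).hexdigest()[:12]

    def color_object(store, owner_secret, object_id, payload):
        store[object_id] = {"payload": payload, "color": make_color(owner_secret, object_id)}

    def matches(store, owner_secret, object_id) -> bool:
        """Recompute the color from the requester's credential and compare with the stored one."""
        entry = store.get(object_id)
        return entry is not None and hmac.compare_digest(
            entry["color"], make_color(owner_secret, object_id))

    store = {}
    color_object(store, b"alice-secret", "dataset-42", b"...records...")
    print(matches(store, b"alice-secret", "dataset-42"))  # True: colors match
    print(matches(store, b"mallory-key", "dataset-42"))   # False: color mismatch

Coloring or decoloring here is a single keyed hash per object, which is far cheaper than encrypting and decrypting the whole data set; full cryptography can still be layered on top for confidentiality.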

6. Write about Reputation-Guided Protection of Data Centers

• Reputation systems for cloud services help assess trustworthiness and protect against attacks. They
can be either centralized or distributed. In a centralized system, one authority manages the reputation,
while in a distributed system, multiple reputation centers work together. Centralized systems are easier
to set up but require powerful servers, whereas distributed systems are more complex but offer better
scalability and reliability in case of failures. (A minimal sketch of a centralized reputation update follows
at the end of this answer.)

• Reputation systems also differ in their scope of evaluation:

• User-oriented systems focus on individual users or agents. Most P2P reputation systems fall
into this category.

• Resource-based systems assess the reputation of an entire resource or service, like cloud
platforms offering products or services. For example, companies like eBay, Google, and
Amazon use centralized reputation systems to manage trust for their services.
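
The sketch below shows a centralized, resource-based reputation store in miniature: one authority collects user ratings of a service and folds them into an exponentially weighted score, so recent behavior counts more than old behavior. The decay factor, rating scale, and neutral prior are assumptions for illustration, not a specific deployed system.

    class CentralReputation:
        """Single-authority reputation store for cloud services (resource-based scope)."""

        def __init__(self, decay=0.9):
            self.decay = decay          # weight given to the existing score
            self.scores = {}            # service name -> reputation in [0, 1]

        def rate(self, service: str, rating: float) -> float:
            """Fold a new rating in [0, 1] into the service's reputation."""
            old = self.scores.get(service, 0.5)        # neutral prior for unknown services
            new = self.decay * old + (1 - self.decay) * rating
            self.scores[service] = new
            return new

    rep = CentralReputation()
    for r in (1.0, 1.0, 0.0, 1.0):                      # hypothetical user feedback
        rep.rate("storage-service", r)
    print(round(rep.scores["storage-service"], 3))      # current trust estimate

A distributed design would replicate this store across several reputation centers and aggregate their scores, trading simplicity for scalability and fault tolerance.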

7. Explain about Cloud Security: Top concern for cloud users


Security Concerns:

Users worry about unauthorized access, data theft, and loss of control over sensitive data.
Data is more vulnerable in storage than during processing.
Threats include flaws in VMMs, rogue VMs, VMBRs, and insider threats from CSP employees.

Data Lifecycle Concerns:

Uncertainty about whether deleted data is truly removed or recoverable.


Seamless backups by the cloud service provider (CSP) may occur without user knowledge, posing
risks such as accidental exposure or loss.

Lack of Standards:

No standardization for interoperability, handling service interruptions, or moving between CSPs.
Auditing and compliance remain challenging.

Evolving Technology Risks:

Autonomic computing introduces new security risks due to self-organizing and self-repairing
systems.
Multitenancy Risks:

Shared resources lead to cost savings but increase vulnerabilities, especially in SaaS models with
sensitive user data.

Legal and Jurisdictional Issues:

Unclear which country's laws apply to stored or transferred data.


CSPs may outsource data handling, complicating compliance.
CSPs might be legally obligated to share data with law enforcement.

The contract between the user and the CSP should do the following:

1. State explicitly the CSP’s obligations to securely handle sensitive information and its
obligation to comply with privacy laws.
2. Spell out CSP liabilities for mishandling sensitive information.
3. Spell out CSP liabilities for data loss.
4. Spell out the rules governing the ownership of the data.
5. Specify the geographical regions where information and backups can be stored.

8. Explain about privacy, privacy impact assessment, and the need for a PIA tool

The term privacy refers to the right of an individual, a group of individuals, or an organization to keep
information of a personal or proprietary nature from being disclosed to others.

The digital age has confronted legislators with significant challenges related to privacy as new threats
have emerged. For example, personal information voluntarily shared, but stolen from sites granted
access to it or misused, can lead to identity theft. As the U.S. Federal Trade Commission put it:

“Consumer-oriented commercial Web sites that collect personal identifying information from or
about consumers online would be required to comply with the four widely accepted fair information
practices:

• 1. Notice. Web sites would be required to provide consumers clear and conspicuous notice
of their information practices, including what information they collect, how they collect it
(e.g., directly or through nonobvious means such as cookies), how they use it, how they
provide Choice, Access, and Security to consumers, whether they disclose the information
collected to other entities, and whether other entities are collecting information through the
site.
• 2. Choice. Web sites would be required to offer consumers choices as to how their personal
identifying information is used beyond the use for which the information was provided (e.g.,
to consummate a transaction).

• 3. Access. Web sites would be required to offer consumers reasonable access to the
information a Web site has collected about them, including a reasonable opportunity to
review information and to correct inaccuracies or delete information.
• 4. Security. Web sites would be required to take reasonable steps to protect the security of
the information they collect from consumers. The Commission recognizes that the
implementation of these practices may vary with the nature of the information collected and
the uses to which it is put, as well as with technological developments.”

• Need for PIA Tools:

• PIA tools identify privacy issues in information systems.

• No international PIA standards existed as of 2012, but various countries and organizations
require PIA reports.

• Proactive Privacy Approach:

• Embedding privacy rules during system design is better than making disruptive changes later.

• Example: Assessing the U.K.-U.S. Safe Harbor process for compliance with European data
protection laws.

• Proposed PIA Tool:


• A web-based SaaS tool accepts project details, privacy risks, and stakeholder inputs.

• Generates a PIA report with findings, risk summaries, and considerations for security,
transparency, and data flows.

• Tool Design:

• Uses a knowledge base maintained by experts.

• Users answer a questionnaire, and the tool generates additional questions as needed.

• An expert system evaluates compliance and prioritizes rules to produce the report.
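
A minimal sketch of the questionnaire-plus-rules idea follows: answers collected from the user are checked against a small knowledge base of privacy rules, and the matching findings become the body of the PIA report. The questions, rules, and wording here are hypothetical; a real tool's knowledge base is maintained by domain experts, and the expert system would also generate follow-up questions.

    # Hypothetical knowledge base: (question key, risky answer, finding for the report).
    RULES = [
        ("stores_pii", True, "Personal data is stored: document retention and deletion policies."),
        ("cross_border", True, "Data crosses borders: verify applicable data-protection laws."),
        ("third_party_sharing", True, "Data is shared with third parties: require contractual safeguards."),
        ("encryption_at_rest", False, "Data is not encrypted at rest: flag as a high-priority risk."),
    ]

    def generate_pia_report(answers: dict) -> list:
        """Return the findings triggered by the questionnaire answers."""
        return [finding for key, risky, finding in RULES if answers.get(key) == risky]

    answers = {"stores_pii": True, "cross_border": True,
               "third_party_sharing": False, "encryption_at_rest": False}
    for line in generate_pia_report(answers):
        print("-", line)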

9. Security risks posed by a management OS

Dom0 manages the building of all user domains (DomU), a process consisting of several steps:
1. Allocate memory in the Dom0 address space and load the kernel of the guest operating system
from secondary storage.

2. Allocate memory for the new VM and use foreign mapping to load the kernel to the new VM.

3. Set up the initial page tables for the new VM.

4. Release the foreign mapping on the new VM memory, set up the virtual CPU registers, and launch
the new VM.
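
The four build steps can be summarized in the following outline. This is Python-style pseudocode for illustration only, not actual Xen toolstack code; every method name on the hypothetical dom0 object is invented for clarity.

    def build_domu(dom0, image_path, mem_mb):
        """Illustrative outline of how Dom0 constructs a new user domain (DomU)."""
        # 1. Load the guest kernel into Dom0's own address space from secondary storage.
        kernel = dom0.load_kernel_image(image_path)

        # 2. Allocate memory for the new VM and copy the kernel in via a foreign mapping.
        vm = dom0.allocate_vm_memory(mem_mb)
        mapping = dom0.foreign_map(vm)
        mapping.write(kernel)

        # 3. Set up the initial page tables for the new VM.
        dom0.build_page_tables(vm)

        # 4. Release the foreign mapping, set up the virtual CPU registers, and launch the VM.
        mapping.release()
        dom0.init_vcpu_registers(vm)
        dom0.launch(vm)
        return vm

Each of these steps is also an opportunity for abuse, which is exactly what the attacks listed next exploit.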

A malicious Dom0 can play several nasty tricks at the time when it creates a DomU:

• Refuse to carry out the steps necessary to start the new VM, an action that can be considered a
denial-of-service attack.

• Modify the kernel of the guest operating system in ways that will allow a third party to monitor
and control the execution of applications running under the new VM.

• Undermine the integrity of the new VM by setting the wrong page tables and/or setting up
incorrect virtual CPU registers.

• Refuse to release the foreign mapping and access the memory while the new VM is running.

10. Xoar: Breaking the monolithic design of the TCB
Xoar is a modified version of Xen that is designed to boost system security [90]. The security
model of Xoar assumes that the system is professionally managed and that privileged access
to the system is granted only to system administrators. The model also assumes that the
administrators have neither financial incentives nor the desire to violate the trust of the user.
The security threats come from a guest VM that could attempt to violate the data integrity or
the confidentiality of another guest VM on the same platform or exploit the code of the
guest. Another source of threats is bugs in the initialization code of the management
virtual machine. Xoar is based on microkernel design principles. Xoar's modularity makes
exposure to risk explicit and allows guests to configure access to services based on their
needs. Modularity allows the designers of Xoar to reduce the size of the system’s permanent
footprint and increase the level of security of critical components.
The ability to record a secure audit log is another critical function of a hypervisor facilitated
by a modular design. The design goals of Xoar are:
• Maintain the functionality provided by Xen.
• Ensure transparency with existing management and VM interfaces.
• Maintain tight control of privileges; each component should only have the privileges
required by its function.
• Minimize the interfaces of all components to reduce the possibility that a component can
be used by an attacker.
• Eliminate sharing and make sharing explicit whenever it cannot be eliminated to allow
meaningful logging and auditing.
• Reduce the opportunity of an attack targeting a system component by limiting the time
window when the component runs.
The Xoar system has four types of components: permanent, self-destructing, restarted upon
request, and restarted on timer (see Figure 9.4):
1. Permanent components. XenStore-State maintains all information regarding the state of
the system.
2. Components used to boot the system. These components self-destruct before any user
VM is started. Two components discover the hardware configuration of the server, including
the PCI drivers, and then boot the system:
• PCIBack. Virtualizes access to PCI bus configuration.
• Bootstrapper. Coordinates booting of the system.
3. Components restarted on each request:
• XenStore-Logic.
• Toolstack. Handles VM management requests, e.g., it requests the Builder to create a new
guest VM in response to a user request.
• Builder. Initiates user VMs.
4. Components restarted on a timer. Two components export physical storage device drivers
and the physical network driver to a guest VM:
• Blk-Back. Exports physical storage device drivers using udev rules.
• NetBack. Exports the physical network driver.
Another component, QEMU, is responsible for device emulation.

Builder is very small; it consists of only 13,000 lines of code. XenStore is broken into two
components: XenStore-Logic and XenStore-State. Access control checks are done by a small
monitor module in XenStore-State. Guest virtual machines share only the Builder, XenStore-
Logic, and XenStore-State (see Figure 9.5). Users of Xoar are able to only share service VMs
with guest VMs that they control. To do so, they specify a tag on all the devices of their
hosted VMs. Auditing is more secure; whenever a VM is created, deleted, stopped, or
restarted by Xoar, the action is recorded in an append-only database on a different server
accessible via a secure channel. Rebooting provides the means to ensure that a virtual
machine is in a known-good state. To reduce the overhead and the increased start-up time
demanded by a reboot, Xoar uses snapshots instead of rebooting. The service VM snapshots
itself when it is ready to service a request; similarly, snapshots of all components are taken
immediately after their initialization and before they start interacting with other services or
guest VMs. Snapshots are implemented using a copy-on-write mechanism to preserve any
page about to be modified.

11. Explain about a trusted virtual machine monitor
Now let’s briefly analyze the design of a trusted virtual machine monitor (TVMM) called Terra.
The novel ideas of this design are:
• The TVMM should support not only traditional operating systems, by exporting the hardware
abstraction for open-box platforms, but also the abstractions for closed-box platforms
discussed in Section 9.5. Note that the VM abstraction for a closed-box platform does not allow
the contents of the system to be either manipulated or inspected by the platform owner.
• An application should be allowed to build its software stack based on its needs. Applications
requiring a very high level of security, e.g., financial applications and electronic voting systems,
should run under a very thin OS supporting only the functionality required by the application
and the ability to boot. At the other end of the spectrum are applications demanding low
information assurance but a rich set of OS features; such applications need a commodity
operating system.
• Support additional capabilities to enhance system assurance:
• Provide trusted paths from a user to an application. A path allows a human user to determine
with certainty the identity of the VM it is interacting with and, at the same time, allows the
VM to verify the identity of the human user.
• Support attestation, which is the ability of an application running in a closed box to gain trust
from a remote party by cryptographically identifying itself.
• Provide airtight isolation guarantees for the TVMM by denying the platform administrator
root access.

12. Parallel Computing and Programming Paradigms


The system issues for running a typical parallel program in either a parallel or a distributed
manner would include the following.
Partitioning This is applicable to both computation and data as follows:
• Computation partitioning This splits a given job or a program into smaller tasks. Partitioning
greatly depends on correctly identifying portions of the job or program that can be performed
concurrently. In other words, upon identifying parallelism in the structure of the program, it
can be divided into parts to be run on different workers. Different parts may process different
data or a copy of the same data.
• Data partitioning This splits the input or intermediate data into smaller pieces. Similarly,
upon identification of parallelism in the input data, it can also be divided into pieces to be
processed on different workers. Data pieces may be processed by different parts of a program
or a copy of the same program.
• Mapping This assigns either the smaller parts of a program or the smaller pieces of data to
underlying resources. This process aims to appropriately assign such parts or pieces to be run
simultaneously on different workers and is usually handled by resource allocators in the
system.
• Synchronization Because different workers may perform different tasks, synchronization and
coordination among workers is necessary so that race conditions are prevented and data
dependency among different workers is properly managed. Multiple accesses to a shared
resource by different workers may raise race conditions, whereas data dependency happens
when a worker needs the processed data of other workers.
• Communication Because data dependency is one of the main reasons for communication
among workers, communication is always triggered when the intermediate data is sent to
workers.
• Scheduling For a job or program, when the number of computation parts (tasks) or data
pieces is more than the number of available workers, a scheduler selects a sequence of tasks
or data pieces to be assigned to the workers. It is worth noting that the resource allocator
performs the actual mapping of the computation or data pieces to workers, while the
scheduler only picks the next part from the queue of unassigned tasks based on a set of rules
called the scheduling policy. For multiple jobs or programs, a scheduler selects a sequence of
jobs or programs to be run on the distributed computing system. In this case, scheduling is
also necessary when system resources are not sufficient to simultaneously run multiple jobs
or programs.
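
The partitioning, mapping, and scheduling steps above can be seen in a small data-parallel sketch: the input is split into pieces, the pieces are handed to a pool of workers, and the pool dispatches work until the queue is empty. The chunk size, worker count, and per-chunk computation are arbitrary choices for illustration.

    from multiprocessing import Pool

    def partition(data, chunk_size):
        """Data partitioning: split the input into smaller pieces."""
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    def work(chunk):
        """Computation applied to one piece; here, a simple per-chunk sum."""
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000))
        pieces = partition(data, chunk_size=100)        # data partitioning
        with Pool(processes=4) as pool:                 # mapping pieces onto workers
            partials = pool.map(work, pieces)           # dispatch/scheduling handled by the pool
        print(sum(partials))                            # combine partial results: 499500

Synchronization and communication are hidden inside the pool here; in a distributed setting they become explicit message exchanges among workers.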

13. Running a Job in Hadoop

Job Submission Each job is submitted from a user node to the JobTracker node that might be situated
in a different node within the cluster through the following procedure:

• A user node asks for a new job ID from the JobTracker and computes input file splits.
• The user node copies some resources, such as the job’s JAR file, configuration file, and
computed input splits, to the JobTracker’s file system.
• The user node submits the job to the JobTracker by calling the submitJob() function.

Task assignment The JobTracker creates one map task for each computed input split by the user node
and assigns the map tasks to the execution slots of the TaskTrackers. The JobTracker considers the
localization of the data when assigning the map tasks to the TaskTrackers. The JobTracker also creates
reduce tasks and assigns them to the TaskTrackers. The number of reduce tasks is predetermined by
the user, and there is no locality consideration in assigning them.

• Task execution The control flow to execute a task (either map or reduce) starts inside the TaskTracker
by copying the job JAR file to its file system. Instructions inside the job JAR file are executed after
launching a Java Virtual Machine (JVM) to run its map or reduce task.
• Task running check A task running check is performed through periodic heartbeat messages sent to
the JobTracker from the TaskTrackers. Each heartbeat notifies the JobTracker that the sending
TaskTracker is alive, and whether the sending TaskTracker is ready to run a new task.
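
For a concrete feel of this flow, the sketch below expresses a word count with Hadoop Streaming, a standard Hadoop facility that lets mapper and reducer logic be written in Python. The streaming jar path and HDFS directories in the comment are placeholders; the JobTracker/TaskTracker mechanics described above all happen behind the single submission command.

    # wordcount.py -- run as the mapper with "map" and as the reducer with "reduce".
    import sys

    def mapper():
        # Emit "word<TAB>1" for every word read from stdin.
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reducer():
        # Sum counts per word; Hadoop delivers keys to the reducer in sorted order.
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            if word != current and current is not None:
                print(f"{current}\t{total}")
                total = 0
            current, total = word, total + int(count)
        if current is not None:
            print(f"{current}\t{total}")

    # Submitted with something like (paths are placeholders):
    #   hadoop jar hadoop-streaming.jar -input /data/in -output /data/out \
    #       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
    #       -file wordcount.py

    if __name__ == "__main__":
        (mapper if "map" in sys.argv else reducer)()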

14. Programming the Google App Engine

Google App Engine (GAE) provides a platform for building web applications in Java and Python, with
tools like the Eclipse plug-in and Google Web Toolkit (GWT) for Java developers, and built-in
frameworks like webapp, Django, and CherryPy for Python. GAE features a NoSQL datastore for
schema-less entities (up to 1 MB each), supporting strong consistency, transactions, and optimistic
concurrency. Java developers can use JDO or JPA, while Python offers GQL for queries. Data
transactions can involve multiple entities in "entity groups" stored together for efficiency, and
performance is enhanced by in-memory caching via memcache. GAE also supports large file storage
through the Blobstore (up to 2 GB).
External resource integration includes Secure Data Connection for intranet links, URL Fetch for
HTTP/HTTPS operations, and Google services like Maps and YouTube via the Google Data API. User
authentication can leverage Google Accounts, and the platform provides an Images service for
manipulating image data. Background tasks are managed with cron jobs or task queues, while resource
consumption is controlled through configurable quotas, offering free usage within limits to ensure
budget-friendly operation.
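
A minimal handler tying several of these pieces together is sketched below using the legacy Python runtime's APIs: an ndb datastore entity, a memcache lookup before the datastore query, and a webapp2 route (webapp2 is a later incarnation of the webapp framework mentioned above). The entity kind, cache key, and URL path are assumptions for illustration.

    import webapp2
    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    class Note(ndb.Model):
        """Schema-less datastore entity (each entity may be up to 1 MB)."""
        text = ndb.StringProperty()
        created = ndb.DateTimeProperty(auto_now_add=True)

    class RecentNotes(webapp2.RequestHandler):
        def get(self):
            notes = memcache.get("recent-notes")            # check the in-memory cache first
            if notes is None:
                notes = Note.query().order(-Note.created).fetch(10)
                memcache.add("recent-notes", notes, time=60)  # cache for 60 seconds
            self.response.write("\n".join(n.text for n in notes))

    app = webapp2.WSGIApplication([("/recent", RecentNotes)])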

15. Google File System (GFS)


The Google File System (GFS) is a scalable distributed file system designed to store massive amounts
of data for Google’s applications. Built to handle the high failure rates of cheap, unreliable hardware,
GFS optimizes for large files (100 MB or more) and specific I/O patterns, such as large streaming reads
and write-once operations (mainly appends). It uses a 64 MB block size and replication (at least three
copies) for reliability. The system architecture includes a single master node for metadata management
and cluster control, supported by a shadow master for redundancy, while chunk servers handle data
storage and client interactions directly. GFS eliminates data caching and offers a custom API tailored to
Google applications, simplifying the design and avoiding complex distributed algorithms. While the
single master could be a bottleneck, it effectively manages clusters with over 1,000 nodes, focusing on
high throughput over low latency.
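
Because the chunk size is fixed at 64 MB, a client can compute which chunk a byte offset falls into before asking the master where that chunk's replicas live. The index arithmetic below reflects that design; the round-robin replica placement is a toy stand-in, since real placement weighs disk usage, rack locality, and recent creations.

    CHUNK_SIZE = 64 * 1024 * 1024          # 64 MB GFS chunks
    REPLICATION = 3                        # at least three copies of every chunk

    def chunk_index(byte_offset: int) -> int:
        """Map a byte offset within a file to its chunk index."""
        return byte_offset // CHUNK_SIZE

    def place_replicas(index: int, chunkservers: list) -> list:
        """Toy placement: pick three distinct chunk servers round-robin (illustrative only)."""
        return [chunkservers[(index + i) % len(chunkservers)] for i in range(REPLICATION)]

    servers = ["cs01", "cs02", "cs03", "cs04", "cs05"]
    offset = 200 * 1024 * 1024             # byte 200 MB of a large file
    idx = chunk_index(offset)              # -> chunk 3
    print(idx, place_replicas(idx, servers))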

17. Programming on Amazon EC2

Amazon was the first company to introduce VMs in application hosting. Customers can rent VMs
instead of physical machines to run their own applications. By using VMs, customers can load any
software of their choice. The elastic feature of such a service is that a customer can create, launch, and
terminate server instances as needed, paying by the hour for active servers. Amazon provides several
types of preinstalled VM images, called Amazon Machine Images (AMIs), which are preconfigured with
Linux- or Windows-based operating systems and additional software. Table 6.12 defines three types of
AMI. Figure 6.24 shows an execution environment. AMIs are the templates for instances, which are
running VMs. The workflow to create a VM is: Create an AMI → Create Key Pair → Configure Firewall →
Launch. This sequence is supported by the public, private, and paid AMIs shown in Figure 6.24. The
AMIs are formed from the virtualized compute, storage, and server resources shown at the bottom of
Figure 6.23.
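
The Create AMI → Create Key Pair → Configure Firewall → Launch workflow maps directly onto the EC2 API. The sketch below uses boto3, Amazon's current Python SDK (an assumption relative to the era of this text), with a placeholder AMI ID, region, and resource names.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")        # region is a placeholder

    # Create Key Pair: the private key material is returned once and must be saved.
    key = ec2.create_key_pair(KeyName="demo-key")
    with open("demo-key.pem", "w") as f:
        f.write(key["KeyMaterial"])

    # Configure Firewall: create a security group and open SSH.
    sg = ec2.create_security_group(GroupName="demo-sg", Description="demo firewall")
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # Launch: start one instance from a (placeholder) AMI; billing runs while it is active.
    resp = ec2.run_instances(ImageId="ami-0123456789abcdef0", InstanceType="t2.micro",
                             KeyName="demo-key", SecurityGroupIds=[sg["GroupId"]],
                             MinCount=1, MaxCount=1)
    print(resp["Instances"][0]["InstanceId"])

    # Terminate when done to stop charges:
    # ec2.terminate_instances(InstanceIds=[resp["Instances"][0]["InstanceId"]])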
18. Manjrasoft Aneka Cloud and Appliances

Aneka, developed by Manjrasoft in Melbourne, Australia, is a cloud application platform for rapidly
developing and deploying parallel and distributed applications on private or public clouds, including
Amazon EC2. It supports multiple programming abstractions and runtime environments, enabling
system administrators to monitor and control cloud infrastructure. Built on the Microsoft .NET
framework with Linux support via Mono, Aneka excels in workload distribution by leveraging virtual
and physical machines to meet Quality of Service (QoS) and Service-Level Agreement (SLA)
requirements.

Aneka provides three key capabilities:

1. Build: Offers an SDK with APIs and tools to create applications and establish various runtime
environments, including enterprise/private clouds and hybrid setups using resources like
XenServer and Amazon EC2.

2. Accelerate: Enables rapid application deployment across multiple environments (Windows,
Linux/UNIX), dynamically leasing resources from public clouds like EC2 to meet QoS deadlines
when local resources are insufficient.

3. Manage: Includes management tools (GUI and APIs) for remote and global cloud setup,
monitoring, and maintenance. Aneka supports dynamic provisioning, accounting, and
prioritization based on SLA/QoS requirements.

Aneka supports three programming models:

• Thread programming: Optimizes multicore computing capabilities in a cloud.

• Task programming: Simplifies prototyping and deploying independent task applications.

• MapReduce programming: Facilitates large-scale distributed data processing.

This flexibility and feature set make Aneka a versatile solution for managing and accelerating
applications in diverse cloud environments.

BigTable, Google’s NOSQL System

BigTable is a highly scalable, distributed storage system designed to handle large-scale structured and
semi-structured data. Here's an overview of its core components and design principles:

1. Row Storage and Ordering:

o Data is stored in rows, which are implicitly created upon storing data.

o Rows are lexicographically ordered, ensuring data locality for rows with similar keys.

o Row keys can be selected by clients to optimize data placement.

2. Tablets:

o Large tables are divided into tablets, each containing a contiguous range of rows.

o Each tablet holds around 100 MB to 200 MB of data.

o This design supports fine-grained load balancing and faster recovery. For example:

 A failed machine's tablets are distributed to other machines.

 Each serving machine typically manages ~100 tablets.

3. Master-Worker Architecture:

o A BigTable master:

 Manages metadata.

 Handles load balancing by redistributing tablets from overloaded machines.

o Tablet servers handle actual data storage and retrieval.


o Clients interact with tablet servers using a BigTable client library, with occasional
communication with the master for metadata or configuration updates.

4. Chubby Lock Service:

o BigTable depends on the Chubby distributed lock service for:


 Master election to ensure only one active master.

 Maintaining consistency and location bootstrapping

The BigTable system is built on top of an existing Google cloud infrastructure. BigTable uses the
following building blocks:
1. GFS: stores persistent state.
2. Scheduler: schedules jobs involved in BigTable serving.
3. Lock service: master election, location bootstrapping.
4. MapReduce: often used to read/write BigTable data.
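
Because rows are kept in lexicographic order and each tablet covers a contiguous key range, locating a row is a range lookup. The sketch below shows that lookup against a hypothetical tablet map where each tablet covers rows up to and including its end key; real clients resolve tablet locations through a METADATA table and cache the results.

    import bisect

    # Hypothetical tablet map: (end row key, tablet server responsible for that range).
    TABLETS = [
        ("com.cnn.www", "tabletserver-07"),
        ("com.google.www", "tabletserver-02"),
        ("com.zzz", "tabletserver-11"),
        ("\xff", "tabletserver-04"),          # final tablet covers the remaining key space
    ]
    END_KEYS = [end for end, _ in TABLETS]

    def locate_tablet(row_key: str) -> str:
        """Return the tablet server responsible for a row key."""
        return TABLETS[bisect.bisect_left(END_KEYS, row_key)][1]

    print(locate_tablet("com.cnn.www/index.html"))   # -> tabletserver-02
    print(locate_tablet("com.yahoo.www"))            # -> tabletserver-11

Choosing row keys with shared prefixes (for example, reversed domain names) keeps related rows in the same or neighboring tablets, which is exactly the data locality the design aims for.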

Chubby, Google’s Distributed Lock Service

Chubby is a coarse-grained locking service used internally by Google, providing reliability and
consistency through the Paxos protocol. It allows for the storage of small files in a file system tree-like
namespace shared across five servers in each Chubby cell. Clients interact with these servers via a
Chubby library to perform file operations, ensuring fault tolerance even during node failures. Chubby
serves as Google’s primary internal name service, playing a crucial role in systems like GFS and BigTable
by facilitating primary election in distributed setups.

In comparison, AWS offers auto-scaling and elastic
load balancing to optimize cloud resource management. Auto-scaling dynamically adjusts EC2 instance
capacity based on demand, scaling up during spikes to maintain performance and scaling down during
lulls to reduce costs. Elastic load balancing distributes traffic across multiple instances, ensuring
reliability by bypassing failed nodes and balancing loads. These services are supported by CloudWatch,
which monitors metrics like CPU utilization, disk I/O, and network traffic, providing insights for efficient
resource management and system scalability.
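
The CloudWatch metrics mentioned above can be pulled programmatically. The sketch below uses boto3 (an assumption, since the SDK postdates parts of this text) to fetch average CPU utilization for one instance, which is the kind of signal an auto-scaling policy acts on; the instance ID, region, and period are placeholders.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,                      # 5-minute buckets
        Statistics=["Average"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f'{point["Average"]:.1f}% CPU')
        # An auto-scaling policy would add instances when this average stays high
        # and remove them when it stays low.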

EMERGING CLOUD SOFTWARE ENVIRONMENTS

1. Open Source Eucalyptus and Nimbus

The Eucalyptus system is an open-source cloud platform designed for managing virtual
machines (VMs) and storage, inspired by Amazon EC2. It uses Walrus as a storage system
analogous to Amazon S3, enabling users to upload and register VM images, which can be
linked with kernel and ramdisk images and stored in user-defined buckets for retrieval across
availability zones. This facilitates the creation and deployment of specialized virtual
appliances. Eucalyptus is available in both open-source and commercial versions.
Nimbus, another open-source cloud solution, provides Infrastructure-as-a-Service (IaaS)
capabilities, allowing clients to lease remote resources by deploying and configuring VMs. Its
Nimbus Web interface, built using Python Django, supports user and administrative functions.
Nimbus integrates with Cumulus, a storage system compatible with Amazon S3’s REST API,
offering additional features like quota management while supporting tools like boto and
s3cmd. It supports two resource management strategies: the resource pool mode, where
Nimbus controls a pool of VM nodes, and the pilot mode, which relies on a Local Resource
Management System (LRMS) to allocate resources. Additionally, Nimbus implements Amazon
EC2's interface, enabling users to interact with Nimbus clouds using EC2-compatible clients.
