Cloud Computing and Virtualization

The terms “virtualization” and “cloud computing” are often used interchangeably, although they refer to different things.

Virtualization enables a single computer to host multiple independent virtual computers that share the host computer hardware. Virtualization software separates the actual physical hardware from the virtual machine (VM) instances. Each VM runs its own operating system and accesses hardware resources through software running on the host computer. A VM can be saved as an image file and restarted later when required.
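For example, on a Linux host running a KVM hypervisor, the VMs that share the hardware can be listed programmatically. The following is a minimal sketch, assuming the libvirt-python bindings and a local QEMU/KVM hypervisor; the connection URI and output format are illustrative.

```python
import libvirt

# Connect to the local QEMU/KVM hypervisor (read-only is enough for listing).
conn = libvirt.openReadOnly("qemu:///system")

# Each defined VM (a "domain" in libvirt terms) shares the host's hardware.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

conn.close()
```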

It is important to remember that all the VMs share the resources of the host computer. Therefore, the number of VMs that can run at the same time is limited by the host's processing power, memory, and storage.
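As a rough illustration, the sketch below estimates how many identical VMs a host could support; the host and VM figures are made-up examples, not recommendations.

```python
# Toy illustration: the number of VMs a host can run is bounded by whichever
# resource (CPU cores, RAM, or storage) runs out first. Figures are made up.
host = {"cpus": 32, "ram_gb": 256, "disk_gb": 4000}
vm_size = {"cpus": 4, "ram_gb": 16, "disk_gb": 100}

max_vms = min(host[r] // vm_size[r] for r in host)
print(f"This host can run at most {max_vms} VMs of this size")  # -> 8 (CPU-bound)
```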

Cloud computing separates the applications from the hardware. It provides organizations with on-demand delivery of computing services over the network. Service providers such as Amazon Web Services (AWS) own and manage the cloud infrastructure, which includes the networking devices, servers, and storage devices and is usually housed in a data center.

Virtualization is the foundation that supports cloud computing. Providers such as AWS offer cloud services using powerful servers that can dynamically provision virtual servers as required.
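For instance, a virtual server can be provisioned on demand through the provider's API. The sketch below uses the AWS SDK for Python (boto3) to launch a single EC2 instance; the AMI ID, region, and instance type are placeholder values.

```python
import boto3

# Hypothetical values: replace the AMI ID, region, and instance type with your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned virtual server: {instance_id}")
```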

Without virtualization, cloud computing, as it is most widely implemented, would not be possible.


Traditional Server Deployment

To fully appreciate virtualization, it is first necessary to understand how servers are used in an organization.

Traditionally, organizations delivered applications and services to their users using powerful dedicated servers as shown in the figure. These Windows and Linux servers are high-end computers with large amounts of RAM, powerful processors, and multiple large storage devices. New servers are added if more users or new services are required.


Problems with the traditional server deployment approach include:

  • Wasted resources – This occurs when dedicated servers sit idle for long periods waiting until they are needed to deliver their specific service. Meanwhile, these servers waste energy.
  • Single point of failure – This occurs when a dedicated server fails or goes offline. There are no backup servers to handle the failure.
  • Server sprawl – This occurs when an organization does not have adequate space to physically house underutilized servers. The servers take up more space than is warranted by the services that they provide.

Virtualizing servers to use resources more efficiently addresses these problems.

The figure shows a traditional server installation with eight dedicated servers: a web server, an email server, a SQL server, a LAN server, a DHCP server, an Active Directory server, an AAA RADIUS server, and a network management server. The first six are Windows servers and the last two are Linux servers.
