Archive for February, 2011

Virtualization Driven By Data Center Consolidation

Organizations with large-scale IT infrastructures are facing a double-edged challenge. Financial pressure on IT budgets has been exacerbated by ever-growing storage demand and compliance requirements, along with the ever-present need to provide resilient business continuity solutions. In short, IT managers are being asked to do more with less, to a greater degree than ever before.

A combination of compute density, the need for improved management efficiency, energy conservation and information quality has pushed data center consolidation to the top of every IT manager's agenda. Organizations have typically responded to these challenges by employing server virtualization solutions. Data center consolidation traditionally focused on migrating distributed data systems to a shared infrastructure, later advancing to consolidating operating systems using server virtualization techniques and software.

Virtualization is an approach by which several applications—sometimes running on different operating systems—run on the same piece of hardware, creating multiple “virtual” servers from a single machine. Software manages the different applications and systems, resulting in an experience for end users that is indistinguishable from having each application on a dedicated machine.

A virtualized environment, like a data center, uses fewer machines, requiring less physical space and less energy for cooling. By avoiding hardware that runs at partial capacity, virtualization provides greater return on IT investments, and a virtualized server environment provides an IT organization with greater flexibility to deploy new applications.

Many enterprises and corporate organizations are now going virtual in most of their IT implementations, driven mainly by the new data center management approach: consolidation of all data center resources for more effective and efficient management. Every organization wants to use less power, less space, and fewer personnel while gaining more advantage and value at the same time. Virtualization is seen as the key enabler for organizations to reduce the cost of running IT infrastructure while improving levels of availability.

In recent years, server virtualization has evolved from a technology with significant usage in development, training, and test environments to one that also has a viable place in the data center. Space and power limitations in the data center have fueled a large consolidation movement, with server virtualization and clustering at the forefront. While virtualization allows organizations to run multiple unique operating systems on the same physical host simultaneously, it also offers benefits in high availability and system portability. Naturally, the benefits come with tradeoffs. There is little room for error when it comes to managing data center resources. Understanding where each virtualization technology is best suited in the data center allows organizations to realize the benefits of virtualization without falling victim to its weaknesses.

IT organizations combining data center consolidation and server virtualization must understand how the sequence of consolidation operations (i.e., server virtualization before, during, or after data center consolidation) can impact different aspects of the project. With that understanding, an institution can make the best choice for its particular set of circumstances. A successful combination of server virtualization and data center consolidation yields benefits, including a flexible infrastructure, efficient use of IT resources, reduced costs, and a better posture for adoption of cloud-related services. In my opinion, the best path to this “consolidation nirvana” is to perform server virtualization before data center consolidation. At its very core, virtualization offers three key features that can greatly enhance most data center and business continuity strategies:
a) The ability to provide high availability, both local and remote, across a far broader range of service tiers.
b) The abstraction of services (compute, storage, network and application) from the underlying infrastructure, enabling greater levels of flexibility.
c) The reduction in physical infrastructure through consolidation, which frees organizations from the capital and operational burden of running expensive DC and DR sites.

The ability to take hundreds of legacy, often poorly protected servers and move them all to a fully clustered system at little additional cost, and quickly, is also of huge benefit. On top of this, virtualization allows virtual servers to be backed up as a complete image. This further reduces risk to the business, particularly for services that are no longer supported by the vendor or whose internal developers are long gone; there are plenty of such cases around.

In the early days, before many of the new toolsets became available, the ability to replicate many virtual servers from one site to another was great, but the recovery process was complex. It involved a significant number of manual processes or a very complex set of scripts that required modification every time a change was made. Today, as greater numbers of automation tools hit the market, DR, for instance, is becoming a ‘push-the-green-button’ solution requiring fewer and fewer administrators. This level of automation simply wouldn’t be possible without virtualization technologies.
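The shift described above, from per-change scripts to a push-button runbook, can be sketched in a few lines. This is a hypothetical skeleton, not any vendor's API: the step names and the no-op step functions are placeholder assumptions standing in for real hypervisor, storage and DNS calls:

```python
# Minimal sketch of a "push-button" DR runbook: the manual recovery
# steps are captured once as an ordered plan instead of ad hoc scripts.
# Step names and the lambda bodies are hypothetical placeholders.
RECOVERY_PLAN = [
    ("promote replica storage", lambda: True),
    ("power on database VMs", lambda: True),
    ("power on application VMs", lambda: True),
    ("repoint DNS to DR site", lambda: True),
]

def run_failover(plan):
    """Execute recovery steps in order, stopping at the first failure."""
    for name, step in plan:
        ok = step()  # in practice: call the hypervisor/storage/DNS API
        print(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

run_failover(RECOVERY_PLAN)
```

The design point is that the plan is data: when the environment changes, an administrator edits the list rather than rewriting fragile recovery scripts.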

Taking this one step further, the days of having specific DC and DR strategies for unplanned disasters could be a thing of the past as more and more technologies have business continuity solutions built in by default. Cloud storage solutions, based on virtualization technologies, now enable data to be made available any time any place, regardless of where the critical failure happened.

Still, some enterprises may not be in a position to deploy a grid infrastructure. The reasons may include enterprise size, footprint, IT policy, outsourcing, lack of budget, or certain certification requirements. In these circumstances it is generally recognized as good practice to run applications with non-intensive workloads under server virtualization in order to maximize consolidation.

However, where maximizing consolidation, availability and agility is of paramount importance, a combination of server virtualization and grid-based solutions is the best way to realize those benefits. Working in tandem, they can deliver enhanced server virtualization, the ability to dynamically scale within and across nodes, and the dynamic resizing of virtual nodes.

The benefits of virtualization in reducing costs for large-scale organizations are undeniable. However, while server virtualization has brought major benefits, it can also introduce potential vulnerabilities. In a physical server environment, the loss of a single server has significantly less impact than in the virtual world, where, depending on workload, the consolidation ratio of virtual machines running on a single physical server can be in the 10-15:1 range.

A physical server failure can affect all of the virtual machines and applications running on that piece of hardware. Similarly, failure of the virtualization layer itself impacts all running virtual environments. The complexity of this scenario grows as organizations standardize on server virtualization and deploy tier-one applications in a virtual server environment. In short, virtualization, while hugely effective at what it does, is not enough on its own to safeguard against unplanned downtime. Furthermore, while server virtualization can address consolidation at the server level, it can be found wanting at the level of storage, data and applications.
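The enlarged blast radius can be made concrete with a little arithmetic. In this sketch, the 60 services and the 12:1 consolidation ratio are illustrative assumptions (the ratio chosen from within the 10-15:1 range mentioned above), and even placement of virtual machines across hosts is assumed:

```python
def services_lost(total_services, services_per_host, hosts_failed=1):
    """Services taken down when hosts fail, assuming even placement."""
    return min(total_services, services_per_host * hosts_failed)

# 60 services: one per host physically, vs. consolidated 12:1 virtually
physical = services_lost(60, services_per_host=1)
virtual = services_lost(60, services_per_host=12)
print(f"One host failure: physical loses {physical}, virtual loses {virtual}")
```

A single hardware fault that once took down one service now takes down a dozen, which is why high-availability clustering and image-level backup become prerequisites rather than options in a consolidated environment.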
