While IT journalists debate whether 2008 will indeed be “the year of virtualization,” an October 2007 report from Gartner named “Virtualization 2.0” one of the top ten strategic technologies for 2008. There is no doubt that recent years have seen a steady increase in enterprises adopting server virtualization. Yet this rush to stay on the cutting edge of technology can lead organizations to undertake enormous company-wide virtualization initiatives without any idea of where to start.
It’s no wonder organizations have taken such a keen interest in virtualization, considering it holds the promise of addressing three of the concerns most top of mind for enterprise IT departments today:
- Cost reduction: The ability to host multiple virtual machines (VMs) on one physical server can dramatically reduce the amount of hardware required to support applications and services, decreasing the cost of infrastructure — as well as power and cooling.
- Improved performance against SLAs (service level agreements): Virtualization gives IT more flexibility to adapt and respond to changing business demands, creating additional service availability when it is needed.
- Risk reduction: A “gold standard” VM is incredibly easy to deploy, making standardization and genuine compliance across the enterprise straightforward, at least in theory.
Solving Problems or Making Them Worse?
The great irony that many companies eventually come to realize is that virtualization, when improperly managed, can severely exacerbate the very problems it was intended to correct. For instance, while the cost of hardware decreases, the cost of employing staff with the additional skill sets required to manage a virtualized environment increases.
Along with the flexibility of VM deployment comes an added complexity — a brand new layer of dependencies to manage. Without a thorough understanding of the dependencies between business applications and the underlying infrastructure, you can’t effectively monitor and track a virtualization program.
Finally, the ease of deploying standardized VMs goes hand in hand with the ease of circumventing controls and deploying highly customized VMs that introduce risk because they go unaccounted for in any system of record.
With these contradictions and hazards in mind, there are five basic steps that should serve as the framework for deploying virtualization technologies.
1. Understand What You Have
Before you can put a plan in place for the virtualization process, it is crucial to obtain an accurate and up-to-date picture of your data center assets, the business applications attached to each and the dependencies between them.
Too often this information is gathered manually, an incredibly time-consuming process, meaning that it is often out of date, inaccurate or largely irrelevant almost immediately. Even an apparently minor data point such as the number of servers reported in a location will often be off by more than 20 percent from the actual figure unless the information is collected automatically, and the problem only intensifies with more complex types of information.
Discovery and dependency mapping can and should be automated, and the data collected must be very close to 100 percent accurate so it can serve as a reliable foundation for the project. It is very dangerous to rely on data that you believe is accurate but have no way of verifying, particularly as such data will often be only 60 to 80 percent correct.
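As a rough illustration of the reconciliation involved, the sketch below compares a manually maintained server list against an automatically discovered inventory and reports the size of the gap. The host names and data structures are hypothetical; a real discovery tool would supply this data itself.

```python
# Hypothetical sketch: reconcile a manually maintained asset record against
# an automatically discovered inventory to gauge how stale the manual data is.
# The host names and fields here are illustrative, not from any real tool.

recorded_hosts = {          # what the spreadsheet / system of record says exists
    "web-01", "web-02", "db-01", "app-01", "app-02",
}

discovered_hosts = {        # what an automated discovery sweep actually found
    "web-01", "web-02", "web-03", "db-01", "app-01", "legacy-report-01",
}

missing_from_records = discovered_hosts - recorded_hosts   # running but untracked
gone_but_recorded = recorded_hosts - discovered_hosts      # tracked but not found

discrepancy = len(missing_from_records | gone_but_recorded) / len(discovered_hosts)

print(f"Untracked hosts:      {sorted(missing_from_records)}")
print(f"Stale record entries: {sorted(gone_but_recorded)}")
print(f"Discrepancy vs. discovered estate: {discrepancy:.0%}")
```

Even a trivial comparison like this makes the staleness of manually gathered records visible at a glance.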
2. Prioritize According to Business Need
Now that you’ve arrived at a set of reliable configuration data, you’re ready to begin planning the deployment of virtualization technologies to support specific services. Odds are you already know what your company’s top-priority business service is, and if you’ve successfully completed step one, you know which infrastructure components are necessary to support that service.
However, do you know which service is priority No. 2? No. 10? Have you considered the interdependencies between these services? Thorough answers to these questions will help you map out a comprehensive sequence of events for virtualizing parts of the infrastructure one at a time — without inadvertently affecting critical services.
Start at the top, with your key business requirements, and move down to the components that support each one. If you do it the other way around, the goal posts will have moved by the time you reach the top. This is also the point at which you should establish channels of communication about the virtualization project with the owners of each business service. Open communication about your plans and progress will allow them to assess the applicable business risks and safeguard accordingly.
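To make the sequencing concrete, here is a minimal sketch of one way to derive a migration order from business priority and inter-service dependencies. The services, priorities and dependency map are invented for illustration; the principle is simply to virtualize the least entangled, least critical services first.

```python
# Hypothetical sketch: derive a migration order from business priority and
# inter-service dependencies. Service names, priorities and dependencies are
# invented for illustration.

# 1 = most critical business service; higher numbers are less critical.
priority = {
    "order-processing": 1,
    "crm": 2,
    "reporting": 5,
    "intranet-wiki": 8,
}

# Which services each service depends on (shares infrastructure with).
depends_on = {
    "order-processing": {"crm"},
    "crm": set(),
    "reporting": {"order-processing", "crm"},
    "intranet-wiki": set(),
}

# Count how many other services rely on each one.
dependents = {svc: 0 for svc in priority}
for svc, deps in depends_on.items():
    for dep in deps:
        dependents[dep] += 1

# Virtualize the least entangled, least critical services first.
migration_order = sorted(
    priority,
    key=lambda svc: (dependents[svc], -priority[svc]),
)

for step, svc in enumerate(migration_order, start=1):
    print(f"{step}. {svc} (priority {priority[svc]}, "
          f"{dependents[svc]} dependent service(s))")
```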
3. Walk Before You Run
Start small. The first service you virtualize should be one you’ve identified as noncritical and not too tightly intertwined with the infrastructure supporting other parts of the business.
After deploying VMs where appropriate to support this service, review your latest inventory and compare it against the plan to verify that the change happened successfully and identify any areas where your data was insufficient to predict the impact of the change. Though errors and outages can take place at this stage, their consequences will be contained and you will be able to use best practices learned from these pilot programs to ensure you successfully migrate the more critical services.
This “crawl, walk, run” approach is a tried and tested guiding principle in leading global investment banks and mature IT shops and is key to mitigating the risks of downtime and expensive outages.
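The post-migration review can be as simple as a mechanical comparison of the planned end state against what discovery actually reports. The sketch below uses hypothetical component and host names and is only meant to show the shape of that check.

```python
# Hypothetical sketch: after a pilot migration, compare the planned end state
# against what automated discovery actually reports. All names are invented.

planned = {
    # service component -> host it should now run on
    "wiki-frontend": "esx-host-01/vm-wiki-fe",
    "wiki-database": "esx-host-01/vm-wiki-db",
}

discovered = {
    "wiki-frontend": "esx-host-01/vm-wiki-fe",
    "wiki-database": "physical-db-07",         # migration did not happen
    "wiki-search":   "esx-host-02/vm-search",  # component the plan missed
}

for component, target in planned.items():
    actual = discovered.get(component)
    if actual is None:
        print(f"MISSING   {component}: not found by discovery")
    elif actual != target:
        print(f"MISMATCH  {component}: expected {target}, found {actual}")
    else:
        print(f"OK        {component} on {actual}")

for component in discovered.keys() - planned.keys():
    print(f"UNPLANNED {component}: running on {discovered[component]}, "
          f"review why the plan did not cover it")
```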
4. Apply Controls to Manage VM Configuration
If you have build guidelines and best practices in place for physical server deployments, these controls should also apply to virtual servers.
However, the very nature of deploying VMs means that processes can be circumvented. You’ll need to implement a system to track the addition or removal of VMs and detect and verify their configuration information as they enter the environment.
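A minimal sketch of such a check, assuming a hypothetical “gold standard” baseline and invented VM records, might compare each newly detected VM against the build guidelines and flag any deviation:

```python
# Hypothetical sketch: check each newly detected VM against a "gold standard"
# build baseline and flag deviations. Fields and values are illustrative only.

gold_standard = {
    "os_version": "RHEL 5.1",
    "patch_level": "2008-03",
    "monitoring_agent": "installed",
    "backup_agent": "installed",
}

new_vms = {
    "vm-app-14": {
        "os_version": "RHEL 5.1",
        "patch_level": "2008-03",
        "monitoring_agent": "installed",
        "backup_agent": "installed",
    },
    "vm-test-03": {
        "os_version": "RHEL 4.6",       # deviates from the baseline
        "patch_level": "2007-11",
        "monitoring_agent": "missing",
        "backup_agent": "installed",
    },
}

for vm, config in new_vms.items():
    deviations = {
        key: (config.get(key, "absent"), expected)
        for key, expected in gold_standard.items()
        if config.get(key) != expected
    }
    if deviations:
        print(f"{vm}: NON-COMPLIANT")
        for key, (found, expected) in deviations.items():
            print(f"  {key}: found {found!r}, expected {expected!r}")
    else:
        print(f"{vm}: compliant with build baseline")
```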
5. Implement Controls to Manage VM Sprawl
Continuous oversight is also necessary here. The only way to avoid VM sprawl in the long term is to start with a comprehensive and highly reliable method of detecting exactly when VMs enter the environment, tracking them as they move around the environment and highlighting unauthorized deployments immediately.
This is a step often undertaken only after VM sprawl has become a problem. Proactively implementing systems and tools will dramatically reduce the time-to-value for your virtualization project.
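As an illustration of the kind of sweep involved, the sketch below compares the VMs found by discovery against a system of record and a previous sweep, flagging unauthorized deployments and VMs that have moved. All names and record formats are invented.

```python
# Hypothetical sketch: flag VMs that appear in the environment but not in the
# system of record, and note VMs that have moved hosts since the last sweep.
# Names and the shape of the records are invented for illustration.

system_of_record = {        # authorized VMs and their approved hosts
    "vm-web-01": "esx-host-01",
    "vm-db-01":  "esx-host-02",
}

previous_sweep = {
    "vm-web-01": "esx-host-01",
    "vm-db-01":  "esx-host-02",
}

current_sweep = {           # what discovery sees right now
    "vm-web-01": "esx-host-03",      # moved since the last sweep
    "vm-db-01":  "esx-host-02",
    "vm-scratch-99": "esx-host-01",  # nobody registered this one
}

for vm, host in current_sweep.items():
    if vm not in system_of_record:
        print(f"UNAUTHORIZED  {vm} running on {host}: not in the system of record")
    elif vm in previous_sweep and previous_sweep[vm] != host:
        print(f"MOVED         {vm}: {previous_sweep[vm]} -> {host}")

for vm in system_of_record.keys() - current_sweep.keys():
    print(f"DISAPPEARED   {vm}: recorded but not found in this sweep")
```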
Virtualization can provide tremendous flexibility and agility, but the level of effort required should not be underestimated. A first-time virtualization initiative demands a deeper level of infrastructure intelligence than businesses may expect, and it is critical to understand that every virtualization benefit comes with an associated risk. Companies need to approach these initiatives strategically, with supporting automation where necessary, to meet these challenges head-on.
Richard Muirhead is CEO and founder of Tideway, a provider of IT automation software.