We often field questions from our clients regarding the risks associated with hypervisor/virtualization technology. Ultimately the technology is still software, and it faces many of the same challenges as any commercial software package, but there are definitely some areas worth noting.
The following thoughts are by no means a comprehensive overview of all issues, but they should provide the reader with a general foundation for thinking about virtualization-specific risks.
Generally speaking, virtual environments are not that different from physical environments. They require much of the same care and feeding, but that’s the rub: most companies don’t do a good job of managing their physical environments, either. Virtualization can simply make existing issues worse.
For example, if an organization doesn’t have a vulnerability management program that is effective at activities like asset identification, timely patching, maintaining installed security technologies, change control, and system hardening, then the adoption of virtualization technology usually compounds the problem via increased “server sprawl.” Systems become even easier to deploy, which leads to more systems that are not properly managed.
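To make the asset-identification gap concrete, here is a minimal sketch that diffs a hypervisor’s guest list against a CMDB export to surface unmanaged guests. The file names, CSV columns, and the assumption that both inventories can be exported to CSV are ours, purely for illustration:

```python
import csv

# Hypothetical exports: one CSV of guests as reported by the hypervisor,
# one CSV of assets tracked in the CMDB / patch-management system.
def load_names(path, column):
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

hypervisor_guests = load_names("hypervisor_guests.csv", "vm_name")
managed_assets = load_names("cmdb_assets.csv", "hostname")

# Guests the hypervisor knows about but the CMDB does not: classic sprawl.
for name in sorted(hypervisor_guests - managed_assets):
    print(f"UNMANAGED GUEST: {name}")
```

Any guest that shows up in that output is, by definition, outside the patching and hardening process, which is exactly the sprawl problem described above.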
We often see these challenges crop up in a few scenarios:
Testing environments – Teams can get a system up and running very quickly using existing hardware. Easy and fast…but also dirty. They often don’t take the time to harden the system, bring it up to current patch levels, or install required security software.
Even in scenarios where templates are used, with major OS vendors like Microsoft and Red Hat releasing security fixes on a monthly basis, a template even two months old is already out of date (see the sketch after this list).
Rapid deployment of “utility” servers – Systems that run back-office services like mail, print, file, and DNS servers. Often nobody does much custom work on them, and because they can no longer be physically seen or “tripped over” in the data center, they sometimes fly under the radar.
Development environments – We often see virtualization technology making inroads into companies whose developers need to spin up and spin down environments quickly to save time and money. The same challenges apply: if the systems aren’t maintained (and they often aren’t; developers aren’t usually known for their attention to system administration tasks), they present great targets for the would-be attacker. It’s even worse if the developers use sensitive data for testing purposes. Proper isolation reduces the risks described above, but that isolation has to be well enforced and monitored to really mitigate them.
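As promised above, here is a minimal sketch of the template-staleness check from the testing-environments scenario: flag any template older than one monthly patch cycle. The template names, build dates, and 30-day threshold are illustrative assumptions:

```python
from datetime import date

# Illustrative inventory: template name -> date it was last rebuilt/patched.
templates = {
    "win-server-base": date(2008, 1, 15),
    "rhel-base": date(2008, 3, 1),
}

PATCH_CYCLE_DAYS = 30  # major OS vendors ship security fixes roughly monthly

today = date.today()
for name, built in templates.items():
    age_days = (today - built).days
    if age_days > PATCH_CYCLE_DAYS:
        print(f"{name}: {age_days} days since last rebuild -- patch before deploying")
```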
There are also risks associated with vulnerabilities in the technology itself. The often-feared “guest breakout” scenario, where a virtual machine or “guest” is able to break out of its jail and take over the host (and therefore access data in any of the other guests), is a common concern, although we haven’t heard of any real-world exploitation of these defects…yet. (The underlying vulnerabilities are, however, starting to become better understood.)
There are also concerns about hopping between security “zones” when it comes to compliance or data segregation requirements. For example, a physical environment typically has a firewall and other security controls between a web server and a database server. In a virtual environment, if they share the same host hardware, you typically cannot put a firewall, intrusion detection device, or data leakage control between them. This could violate control mandates found in standards such as PCI in a credit card environment.
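One way to make this zone-hopping concern auditable is to flag any physical host that carries guests from more than one security zone. The sketch below runs over a hypothetical guest-to-host inventory; the host names, zone labels, and data layout are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical inventory: guest -> (physical host, security zone).
inventory = {
    "web01": ("esx-host-1", "dmz"),
    "db01": ("esx-host-1", "internal"),   # shares hardware with a DMZ guest
    "mail01": ("esx-host-2", "internal"),
}

zones_by_host = defaultdict(set)
for guest, (host, zone) in inventory.items():
    zones_by_host[host].add(zone)

# No firewall or IDS can sit between co-resident guests, so mixed zones
# on one host are a potential segmentation (e.g., PCI) finding.
for host, zones in sorted(zones_by_host.items()):
    if len(zones) > 1:
        print(f"{host}: mixed zones {sorted(zones)}")
```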
Even assuming there are no vulnerabilities in the hypervisor technology that allow for evil network games between guests, when you house two virtual machines/guests on the same hypervisor/host you often lose visibility into the network traffic between them. So if your security relies on restricting or monitoring at the network level, you no longer have that ability. Some vendors are working on solutions for intra-host communication security, but none of it is mature by any means.
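Until those vendor solutions mature, one stopgap is capturing traffic on the guests themselves, since the “wire” between co-resident guests never reaches a physical network sensor. Below is a minimal Linux-only raw-socket sniffer sketch (it requires root); anything beyond printing Ethernet headers, such as shipping records to a central collector, is left as an assumption:

```python
import socket
import struct

ETH_P_ALL = 0x0003  # capture frames of every protocol

def fmt_mac(mac: bytes) -> str:
    return ":".join(f"{b:02x}" for b in mac)

# AF_PACKET raw sockets are Linux-specific and require root privileges.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

while True:
    frame, _ = sniffer.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(f"{fmt_mac(src)} -> {fmt_mac(dst)} ethertype=0x{ethertype:04x}")
```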
Finally, the “many eggs in one basket” concern is still a factor; when you have 10, 20, 40, or more guest machines on a single piece of hardware, that’s a lot of systems going down should there be a problem. While the virtualization software vendors certainly offer high-availability options with technology such as VMware’s VMotion, redundant hardware, the use of SANs, etc., the cost and complexity add up fairly fast. (And as we have seen from some rather nasty SAN failures the past two months, SANs aren’t always as failsafe as we have been led to believe. You still have backups, right?)
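A little arithmetic shows why consolidation changes the failure math even when expected downtime does not. The per-host failure probability below is an assumption, purely to illustrate the blast-radius point:

```python
# Assume each physical host has a 2% chance of an unplanned outage per year,
# independent of how many guests it carries (illustrative number only).
HOST_FAILURE_PROB = 0.02
TOTAL_GUESTS = 40

for guests_per_host in (1, 10, 20, 40):
    hosts = TOTAL_GUESTS // guests_per_host
    expected_outages = hosts * HOST_FAILURE_PROB * guests_per_host
    print(f"{hosts:2d} hosts x {guests_per_host:2d} guests: "
          f"expected {expected_outages:.1f} guest-outages/yr, "
          f"worst single failure takes down {guests_per_host} services")
```

The expected number of guest-outages per year stays the same; what changes is how many services a single failure takes down at once, which is exactly the “many eggs in one basket” problem.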
While in some situations the benefits of virtualization technology far outweigh the risks, there are certainly situations where existing non-virtualized architectures are better. The trick is finding that line in the midst of the headlong rush toward virtualization.