The Hidden Costs of Virtualization

Virtualization has proved to be one of the most significant cost-saving server technologies to emerge in the last decade. The flexibility to start up whole servers when needed and to stop them again has, in theory, meant that general-purpose server hardware can be readily re-allocated from one task to another as requirements change.

In theory, then, no resources sit idle wasting money because one particular area has been over-specified. In practice, however, the theory doesn't always hold, because the concept obscures a number of hidden costs. This post uncovers some of the hidden costs of virtualization.

One of the key areas where virtualization has hidden costs stems from the fact that general-purpose hardware has to be designed to give the best possible performance across a broad range of the most frequently used applications. That means it may not deliver best-in-class performance for any given application compared to hardware designed specifically with that task in mind. Therefore, if the network admin knows in advance which apps will need high performance, there is a good chance that dedicated hardware will prove more cost-effective than full virtualization.

Linked to this point is multi-core scaling, another area where virtualization can create performance bottlenecks. Even when a virtual machine has dedicated use of the underlying hardware and isn't using emulation, it can still deliver performance that is several times slower than the non-virtualized equivalent.

The main reason this issue crops up is that the virtual machine manager intervenes during synchronization-induced idling in the application itself, the host operating system, or supporting libraries. It is possible to implement software that reduces the detrimental effect of these idle periods via a process of idleness consolidation.

This ensures that each virtual core is fully utilized by interleaving workloads, allowing unused cores to power down and reducing the enterprise's energy costs. In practice, however, virtualized systems do not pay back so easily: their virtual cores spend a fair amount of time sitting idle while they wait for the next block of work.
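The consolidation idea can be illustrated with a short sketch. The per-VM utilization figures and the first-fit-decreasing packing below are illustrative assumptions, not a description of any particular hypervisor's scheduler; the point is simply that mostly-idle workloads can be interleaved onto far fewer cores, letting the rest power down.

```python
# A minimal sketch of idleness consolidation: hypothetical per-VM CPU
# utilization fractions are packed first-fit-decreasing onto as few
# physical cores as possible, so the remaining cores can power down.

def consolidate(utilizations, core_capacity=1.0):
    """Pack fractional VM workloads onto cores; return per-core loads."""
    cores = []  # each entry is the summed load on one physical core
    for load in sorted(utilizations, reverse=True):  # largest first
        for i, used in enumerate(cores):
            if used + load <= core_capacity:
                cores[i] = used + load  # fits on an existing core
                break
        else:
            cores.append(load)  # no core had room: power one up
    return cores

# Eight VMs, each mostly idle: naively they would occupy eight cores.
vm_loads = [0.30, 0.25, 0.20, 0.15, 0.40, 0.10, 0.35, 0.25]
packed = consolidate(vm_loads)
print(f"Cores needed after consolidation: {len(packed)} (was {len(vm_loads)})")
```

Real schedulers must also account for the synchronization-induced idling described above, which is precisely why practical systems rarely achieve this tidy packing.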

Now, coming to the purchase of software licenses: they are generally priced on a per-core basis for the host server, even if only some of those cores will actually run virtual environments using that particular piece of software. There may be no contractual right to buy fewer licenses than the server's full capacity, although some vendors do offer sub-capacity license agreements.

Even then, it will probably be necessary to set up a complex dynamic license-auditing system that checks how many licenses are in use and ensures the number purchased is never exceeded. Either way, extra cost is involved.

Thus, an enterprise will either have to shell out for more licenses than it really needs, or purchase and implement an additional sophisticated system so that the company can prove it is complying with its licensing terms. In the worst-case scenario, where a virtualization server is considerably over-specified for a particular application (for failover reasons, or because it runs a number of other virtual tasks), a company may end up paying for many more licenses than it ever actually uses.
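The gap between full-capacity and sub-capacity licensing is easy to quantify. The core counts and per-core price below are assumed figures for illustration only, not vendor quotes:

```python
# Illustrative per-core licensing arithmetic; all figures are assumptions.

host_cores = 32          # total cores in the virtualization host
cores_actually_used = 8  # cores the licensed software really runs on
price_per_core = 2000    # assumed per-core license price, in dollars

full_capacity_cost = host_cores * price_per_core
sub_capacity_cost = cores_actually_used * price_per_core

print(f"Full-capacity licensing: ${full_capacity_cost:,}")
print(f"Sub-capacity licensing:  ${sub_capacity_cost:,}")
print(f"Over-payment without a sub-capacity agreement: "
      f"${full_capacity_cost - sub_capacity_cost:,}")
```

At these assumed figures the over-specified host quadruples the license bill, which is exactly the worst-case scenario described above.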

Additionally, a network admin should start tracking costs when IT services are virtualized. In a traditional IT environment, a department or application has specific hardware, software, and infrastructure allocated to it, and the costs for these can be clearly ascribed.

But in a virtualized environment, the hardware and infrastructure are shared across all departments and applications that use them, and are allocated dynamically as required. Usage levels are therefore constantly in flux, so keeping track of how much each department or application is consuming is not straightforward. This makes it hard to build solid, data-driven business cases for new capacity initiatives based on utilization.

It should be noted that only some sophisticated, integrated, cloud-based virtual server systems make it possible to allocate costs to the various types of deployment and their utilization levels, or to the underlying server resources used.

This makes it possible to track how much different usage scenarios cost relative to one another, and to set those costs against the revenue the activities generate, so that development budget can be allocated accordingly. But this requires extensive work modeling the cost implications of infrastructure, hardware, and software licensing for different types of virtual machine, which is itself a cost.
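A simple chargeback model gives a feel for what that cost allocation looks like. This sketch assumes metered core-hours per department are available; the department names, usage figures, and monthly cost are all hypothetical:

```python
# A minimal chargeback sketch: the shared monthly infrastructure cost is
# split across departments in proportion to metered core-hour usage.
# All figures are illustrative assumptions.

monthly_infrastructure_cost = 12_000  # shared hardware, power, licenses

core_hours = {            # hypothetical metered usage per department
    "finance": 1_800,
    "engineering": 5_400,
    "marketing": 800,
}

total_hours = sum(core_hours.values())
charges = {dept: monthly_infrastructure_cost * hours / total_hours
           for dept, hours in core_hours.items()}

for dept, cost in charges.items():
    print(f"{dept}: ${cost:,.2f}")
```

Because usage levels are in constant flux, a real system would have to re-meter and re-run an allocation like this continuously, which is where the modeling effort mentioned above comes in.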

Where these hidden costs become a problem, a solution like the StoneFly Hyper Converged Unified Storage and Server could be the answer, as it allows resources to be dynamically re-allocated from one task to a radically different one. With a virtualized operating system, complete hardware utilization and a considerable reduction in power and cooling costs can be achieved.

Although virtualization will continue to have a huge amount to offer the future of computing, in certain circumstances its hidden costs could outweigh the benefits. Solutions such as the StoneFly USS appliance can help replace the "fixed hardware model" of the past with on-demand resource allocation based on your application's needs.

For more technical details, call 510.265.1616 or click StoneFly USS Unified Storage and Server Hyper Converged appliance.

