Dynamic workload management — the ability to sense changes in demand and automatically invoke the requisite application and server resources to meet new workloads — is quickly becoming a fundamental requirement in the new data center. Besides increasing compute efficiencies, the technology helps overcome the shortage of critical data center resources such as floor space and available cooling and power capacity.
Resource scarcity stems in no small part from a long-standing principle of data center design: achieving high power density has always been a primary goal, based on the expectation that overall costs would be proportional to floor space. Naturally, this led to the construction of relatively modest-sized data centers.
But even those built with generous amounts of overhead have been stretched to the extreme in the face of prevailing trends:
* Business process automation and exploitation of Web technologies have led the average business to grow its total server count by 10% per year over the past decade.
* Data center consolidation and resource centralization are being pursued to reduce operating costs, ease compliance and improve security.
* Power and cooling requirements for servers have steadily risen in response to demand for systems with higher performance.
And on top of everything else, energy prices are increasing approximately 5% per year.
Consequently, many organizations are scrambling to build new data centers and/or to take advantage of progress being made on several fronts, including better measurement and monitoring techniques; improved design principles; and high efficiency networking, cooling and power conversion equipment.
There is little doubt that maximizing efficiency and thoroughly optimizing a data center is best achieved by taking a comprehensive approach. IT management should pursue improvements in everything from governance to cooling systems, power distribution and conversion, geographic location, physical layout and materials of construction, IT equipment and operational management.
The problem, however, is that for some companies the issues of data center cost and capacity limitations are already critical. They simply have nothing left; there's no more space and no more power.
For these shops, taking 12 to 36 months to implement strategic, long-term solutions is not fast enough. They need relatively quick, low-cost fixes that deliver meaningful gains and, ideally, remain applicable for future data centers the organization builds as well. Dynamic workload management has the potential to be such a fix.
The objective of dynamic workload management is further reduction of the top consumer of data center resources: servers. The idea is to alleviate the need to have dedicated hardware for intermittent and infrequent application workloads.
The four elements required for a dynamic workload management solution are: a server virtualization capability, a load monitoring capability, an orchestration capability and a load distribution capability.
It is widely accepted that server virtualization technology can be used by organizations to reduce server count. The ability to host dissimilar workloads on a single physical server enables IT shops to avoid the all-too-common scenario where 80% of servers are operating at relatively low, inefficient rates of utilization, typically 5% to 30%.
Another key facet of virtualization, however, is that it introduces a layer of abstraction between applications, operating systems and the hardware on which they run. In other words, workloads can be run without concern for dependencies the applications may have on various elements of the underlying system, such as BIOS version, drivers and various operating system functions. This is important because without abstraction implementing dynamic workload management would be significantly more complex, or at least restricted.
But dynamic workload management extracts additional gains from traditional server virtualization efforts, beyond the initial degrees of consolidation with which most organizations are familiar. And this is where the other three components come into play.
On the surface, the role of the load monitoring capability appears straightforward: to track status and utilization levels for servers. In reality, there is much more to it. In particular, the visibility and intelligence must also be sufficient to provide details on impending resource constraints, such as low memory or disk space, and the relationships between specific workloads and servers — which applications are running where.
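To make the monitoring requirement concrete, here is a minimal sketch of the kind of report a monitoring capability might produce. The `ServerStatus` fields, threshold values and function names are illustrative assumptions, not drawn from any particular monitoring product; the point is that the output names both the looming constraint and the workload-to-server mapping.

```python
from dataclasses import dataclass, field

@dataclass
class ServerStatus:
    """Snapshot of one server's health, as a monitoring agent might report it."""
    cpu_pct: float       # CPU utilization, 0-100
    mem_free_mb: int     # free memory
    disk_free_gb: int    # free disk space
    workloads: list = field(default_factory=list)  # which applications run here

def impending_constraints(fleet: dict,
                          mem_floor_mb: int = 512,
                          disk_floor_gb: int = 5) -> dict:
    """Flag servers approaching a resource limit, keyed by server name.

    Returns, e.g., {'web-02': ['low memory']} so an orchestrator knows not
    just that pressure is high, but which constraint is looming -- and, via
    the workloads field, which applications are affected.
    """
    alerts = {}
    for name, status in fleet.items():
        problems = []
        if status.mem_free_mb < mem_floor_mb:
            problems.append("low memory")
        if status.disk_free_gb < disk_floor_gb:
            problems.append("low disk")
        if problems:
            alerts[name] = problems
    return alerts
```

A real monitoring layer would of course sample these values continuously and track trends rather than single snapshots.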
Next up is the orchestrator. Armed with information about resource constraints, this management application requests the virtualization infrastructure to spin up (or down) additional servers — which can otherwise be kept powered off — and provision them with a specific workload when applicable thresholds are exceeded.
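The orchestrator's threshold logic can be sketched as a simple reconciliation function. The threshold values and the one-server-at-a-time step are illustrative assumptions; the powering on/off and provisioning would be delegated to the virtualization infrastructure.

```python
def reconcile(active: int, busy_fraction: float,
              scale_up_at: float = 0.75, scale_down_at: float = 0.30,
              min_servers: int = 1) -> int:
    """Decide how many servers should currently be powered on.

    busy_fraction is aggregate utilization across the active pool. Above
    the upper threshold, request one more server (the virtualization layer
    powers it on and provisions the workload); below the lower threshold,
    retire one and power it off. Thresholds here are illustrative.
    """
    if busy_fraction > scale_up_at:
        return active + 1
    if busy_fraction < scale_down_at and active > min_servers:
        return active - 1
    return active
```

Run periodically against the monitoring data, this loop keeps spare servers powered off until a threshold is actually exceeded.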
The final element is one that is often overlooked: having an upstream traffic management device on the job. Upon notification that additional servers have been spun up, this device ensures they are added to the appropriate resource pool, adjusts traffic distribution patterns accordingly, and enforces any other applicable traffic management policies.
It is all the better if you employ a full-featured application delivery controller to fulfill the latter capability. That way you only need a single device to provide both the load monitoring and load distribution capabilities. Server count can be lowered even further by employing the delivery controller's offload features. These significantly reduce the load on downstream servers by caching frequently requested content and unburdening them from compute-intensive tasks such as encryption and session management.
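The pool-membership step can be illustrated with a toy model of the load distributor. The class and method names are hypothetical, and a production controller would weight members by health and current load rather than rotating round-robin; the sketch only shows how a newly spun-up server begins receiving traffic once the distributor admits it to the pool.

```python
import itertools

class ServerPool:
    """Toy model of a load distributor's resource pool (hypothetical API)."""

    def __init__(self, members):
        self.members = list(members)
        self._cycle = itertools.cycle(self.members)

    def add_member(self, server: str) -> None:
        """Called when the orchestrator reports a newly spun-up server:
        admit it to the pool and rebuild the rotation so new traffic
        starts reaching it immediately."""
        self.members.append(server)
        self._cycle = itertools.cycle(self.members)

    def next_server(self) -> str:
        """Plain round-robin selection; a real application delivery
        controller would also apply health checks and traffic policies."""
        return next(self._cycle)
```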
One of the clearest use cases for dynamic workload management pertains to organizations that operate multiple, large Web applications. Conventional practice in this case is to operate extra servers on a per-application basis to address both high availability and peak load requirements. With dynamic workload management, a single pool of extra servers can instead be shared across numerous applications — easily reducing the total number of backup/overflow servers by 50% or more.
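The arithmetic behind that claim is easy to check. Assuming, for illustration, six applications that each keep two dedicated standby servers, and a shared pool sized for two applications bursting at the same time (all figures hypothetical):

```python
def spare_reduction(apps: int, spares_per_app: int,
                    concurrent_bursts: int) -> float:
    """Fractional reduction in standby servers when per-application spares
    are replaced by one shared pool sized for the number of applications
    expected to burst simultaneously. All inputs are illustrative."""
    dedicated = apps * spares_per_app               # conventional practice
    shared = concurrent_bursts * spares_per_app     # one pooled reserve
    return 1 - shared / dedicated
```

With these assumptions, twelve dedicated spares shrink to a shared pool of four, a reduction of roughly two-thirds; the 50% figure above corresponds to even more conservative pooling.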
Most organizations also have a range of additional intermittent and infrequent workloads, all of which can be served more efficiently using dynamic workload management. And in a data center environment where every inch, BTU, watt and penny counts, the resulting savings are virtually guaranteed to have a big impact.