Though the period of exponentially growing Information Technology budgets is unlikely to return anytime soon, the large enterprise remains as dependent as ever on its data centers to meet its efficiency and growth objectives. Rigorous data center capacity planning and management are critical to serving today's enterprise. At their core, these disciplines are required to ensure that the data center can accommodate both the expected and unexpected needs of the business.
Capacity Management: The Ideal
It makes intuitive sense that capacity should be managed in business terms, that is, by understanding the data center infrastructure's capability to handle key vertical business processes. How many user transactions can I process in a day? How many web hits can I support in an hour? What kind of volume can my call center handle?
Metrics like these permit capacity planning for both contingencies - temporary spikes in transaction volume - and planned growth. Line of business executives who plan a major customer acquisition initiative need to understand its impact on the data center: do they need to budget for increased data center capacity as part of the effort? Senior management needs to assure regulators that the firm can handle the impact of sudden shifts in market conditions.
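As a rough illustration of planning in these business terms, the sketch below checks a projected transaction peak against measured daily capacity and a safety margin. The figures, the function, and the 20% margin are all hypothetical assumptions for illustration, not benchmarks from any particular environment.

```python
# Hypothetical sketch: expressing capacity headroom in business terms.
# Assume a load test has shown the stack sustains 1.2M transactions/day;
# planning asks whether a campaign-driven peak fits within a safety margin.

def headroom_ok(capacity_per_day, baseline_per_day, campaign_uplift_pct,
                safety_margin=0.20):
    """True if the projected peak stays below capacity less the safety margin."""
    projected_peak = baseline_per_day * (1 + campaign_uplift_pct / 100)
    return projected_peak <= capacity_per_day * (1 - safety_margin)

# A campaign projected to lift a 700k/day baseline by 25%:
print(headroom_ok(1_200_000, 700_000, 25))  # 875k peak vs 960k usable -> True
```

The same check, run with the post-campaign baseline as input, tells the line of business when growth alone will force a budget conversation about added capacity.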
Such metrics are also critical to the planning of any successful merger or acquisition. These transactions almost always result in some level of data center consolidation, which typically brings significant cost savings to the combined enterprise. Consolidation can only go well if there is a clear understanding, in business terms, of the capacity baseline and the "headroom" required for the processing volume of the two firms now operating as one.
In the days when enterprises were mainframe dependent, utilization could be mapped to particular processes and capacity could be understood in precisely the business terms required. Higher transaction volume meant more MIPS - there was transparency between the business requirement and the data center (infrastructure) impact.
The movement toward distributed computing infrastructures has eroded this transparency. With systems now spread across multiple physical as well as logical elements, all connected by a shared network within a shared data center, it is significantly more difficult to take the kinds of vertical slices through the infrastructure that business-centric capacity planning and management require.
Capacity Management in the Data Center
Data center capacity management has grown in importance as enterprises have become increasingly dependent on distributed systems for critical applications and services. Understanding the interdependencies between space, power, and cooling in the environment is critical to knowing how many more servers, storage units, or switches a particular data center can accommodate before some form of upgrade is required. The growing popularity of blade servers and other chassis-based devices - with their unique power and cooling requirements - has challenged traditional physical (data center) capacity management processes and approaches.
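These interdependencies can be illustrated with a minimal sketch that treats remaining capacity as the tightest of the three constraints. Every figure and name below is a hypothetical placeholder, not a vendor specification; real planning would work from measured and derated values.

```python
# Illustrative sketch: how many more 1U servers a room can absorb before
# space, power, or cooling becomes the binding constraint. All figures
# here are hypothetical placeholders, not vendor specifications.

def remaining_server_capacity(free_rack_units, spare_power_kw, spare_cooling_kw,
                              server_units=1, server_power_kw=0.5):
    """Return the number of additional servers and the limiting resource."""
    # Cooling load roughly tracks power draw: nearly all power becomes heat.
    limits = {
        "space": free_rack_units // server_units,
        "power": int(spare_power_kw // server_power_kw),
        "cooling": int(spare_cooling_kw // server_power_kw),
    }
    constraint = min(limits, key=limits.get)
    return limits[constraint], constraint

servers, bottleneck = remaining_server_capacity(
    free_rack_units=120, spare_power_kw=40.0, spare_cooling_kw=35.0)
print(servers, bottleneck)  # cooling binds here: 70 servers, not the 120 space allows
```

The point of the exercise is that the answer is rarely the space figure alone - a room with empty rack units can still be "full" in power or cooling terms, which is exactly the trap that dense chassis-based devices set.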
Knowing "when to say when" in terms of adding raised floor space or upgrading power or cooling capacity is critical to the overall IT planning cycle. Power and cooling upgrades have a typical lead time of six to eighteen months; new raised floor space takes an average of twenty-four months to provision. Virtualization solutions that dynamically allocate compute capacity cannot extend to the physical realities of floor space and the power grid.
It is not surprising, therefore, that many organizations are taking a fresh look at how they plan and manage their data center physical infrastructure. This includes a more formal process of tracking historical trends of space and power utilization and using those trends to produce short, medium, and long term forecasts. These forecasts are tuned to take into account specific major projects, line-of-business growth projections, and business events such as mergers, acquisitions, and other consolidations. There is significant return on investment for organizations that do this well. Physical infrastructure is expensive, and correctly aligning expenditures for space and power with the needs of the business has direct bottom-line impact.
Organizations looking to get started on the path to optimized physical-layer capacity management must begin by properly documenting the current state of their infrastructure. Many enterprises do not have a complete picture of the equipment they have, where it is located, and how it is connected to the power and network grids. Without this baseline, data center capacity management is impossible. With the baseline in place, the next step is to take snapshots over time. The trends - rates of growth or consolidation - can then be modeled against business initiatives and events to create a predictive framework for future planning. At this last stage, data center (physical) capacity is associated with business need, and executives can make the risk-adjusted decisions that good information makes possible.
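The snapshot-and-trend step above can be sketched as a simple least-squares projection. This is an illustrative sketch only - the quarterly figures, the 500 kW planning ceiling, and the assumption of linear growth are all hypothetical, and a real forecast would be adjusted for the projects and business events described earlier.

```python
# Minimal sketch of the snapshot-and-trend approach: fit a linear trend to
# periodic utilization snapshots and estimate when the facility reaches a
# planning ceiling. All data below are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Quarterly snapshots of power draw (kW) against a 500 kW planning ceiling.
months = [0, 3, 6, 9, 12]
power_kw = [310, 330, 345, 370, 390]
ceiling_kw = 500

slope, intercept = linear_fit(months, power_kw)
months_to_ceiling = (ceiling_kw - intercept) / slope
print(round(slope, 1), round(months_to_ceiling))  # ~6.7 kW/month, ~29 months out
```

Read against the lead times cited above, a projection like this one is the difference between ordering a power upgrade with comfortable slack and discovering, too late, that the eighteen-month clock has already run out.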
For more information on how Aperture can improve data center capacity management, click here.