by Nick Ewing, Managing Director, EfficiencyIT
When deploying IT infrastructure to support an enterprise organisation, there are several factors to consider. All of these decisions are based on the specific needs of the business: its type, the customers it serves and the applications it needs to support. For many, however, the first decision is whether to own or to outsource.
Do the needs of the business require predominantly commercial applications, which can be delivered via the cloud? Or does the organisation depend so crucially on local IT assets for performance, data sovereignty or application speed that it is preferable to keep its resources on premises?
As always, cost, mission criticality and performance are the ultimate factors determining the decision, but if it is decided that control of its own IT assets is essential, then an organisation's task becomes designing and building a data centre tailored to meet these demands. In most cases this leaves one final question: how does one deploy the most resilient, secure and operationally efficient data centre solution in the most cost-effective way?
Determining the value of your assets
Total Cost of Ownership (TCO) is the cost of an IT installation over its working life, including the cost of acquiring and deploying all critical infrastructure components such as power, UPS, IT and cooling. Operating Expenditure (OpEx), by contrast, is the cost of running, maintaining and servicing the plant throughout its life.
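The relationship between the two can be sketched as a simple calculation. All figures below are hypothetical, chosen purely to illustrate how acquisition cost and running cost combine over a facility's working life:

```python
def total_cost_of_ownership(capex, annual_opex, years):
    """TCO = acquisition and deployment cost plus cumulative running cost."""
    return capex + annual_opex * years

# Hypothetical figures for a small on-premises installation (GBP):
capex = 500_000        # power, UPS, IT and cooling: acquisition and deployment
annual_opex = 120_000  # running, maintenance and servicing per year

print(total_cost_of_ownership(capex, annual_opex, 10))  # 1700000 over a 10-year life
```

Framed this way, it is clear why a design-stage investment that trims annual OpEx can dominate the lifetime figure, even if it raises the initial CapEx.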
Taking a longer-term, strategic view may be advisable for an organisation looking to extract the maximum possible value from its IT assets. Some businesses may opt to turn them into an investment. However, this means the organisation may also need to adjust its calculations and spend more speculatively at the outset in order to accumulate a better return during operations.
For many enterprise organisations, building a strong case for investing in new and efficient technologies at the design and specification stages can yield tangible benefits to the bottom line over the long run.
Design and build considerations
When deciding to build a data centre on one's own premises, the first consideration is the real estate that an organisation possesses. Does it have sufficient space on site for its IT needs? Is the space available in a single hall or room, or are there multiple locations across which equipment may be deployed? Is it wise to keep all IT assets in one place, or can they be efficiently dispersed throughout the campus?
The answers to these questions will naturally be informed by the structure of the organisation. For example, a research organisation needing access to High Performance Computing (HPC) clusters to process large volumes of complex calculations will likely require high-density racks, large volumes of storage and a heavy cooling capability that would not be needed for more general-purpose servers. Isolating such resources within their own space, alongside the specialist support infrastructure they require, might be more cost-effective in terms of both acquisition and operation.
Choice of cooling solution
Power and cooling account for much of the operating cost of a data centre. Sectors such as the life sciences, however, are continually evolving, and so are their technology requirements. For HPC clusters to keep up with the demands of new GPU-based systems, the owner may, for example, choose to deploy a form of liquid cooling in order to drive efficiency and lower OpEx.
Liquid cooling has quickly made a comeback as a way of maintaining optimal operating temperatures with far lower power consumption than air-cooled systems. Whether using fully immersed methods or direct-to-chip options, liquid cooling can reduce operating costs by up to 14%. There is, of course, a penalty to pay in the cost of acquiring and installing such systems; for HPC racks, however, the payback over time is significant.
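That payback can be estimated with a back-of-the-envelope calculation. The 14% figure comes from the text above; the capital premium and baseline OpEx below are hypothetical assumptions for illustration only:

```python
def payback_years(extra_capex, annual_opex, saving_fraction=0.14):
    """Years for an OpEx saving to recover the extra capital cost.

    saving_fraction: the up-to-14% operating-cost reduction cited for
    liquid cooling; other figures are illustrative assumptions.
    """
    annual_saving = annual_opex * saving_fraction
    return extra_capex / annual_saving

# Hypothetical HPC row: GBP 70,000 extra to install liquid cooling,
# against GBP 200,000 of annual operating cost.
print(payback_years(70_000, 200_000))  # 2.5 years
```

For a long-lived HPC installation, a payback measured in a few years leaves many further years of accumulated savings.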
For general purpose data centres, the choice of cooling system may be affected by the constraints of one’s existing facility. It may be cost-effective to make use of an existing raised floor if available, or if there is sufficient head room in the server room to have such a floor installed. Similarly, hot or cold aisle containment systems, which improve the efficiency of air-cooled installations, should always be considered if there is sufficient space for them.
Power and management systems
The power requirements, especially for the uninterruptible power supply (UPS), will be determined by several factors, including the criticality of the systems under load, the quality of the existing power supply and, of course, the cost. Today, data centre operators may examine the trade-off between traditional UPS systems based on Valve-Regulated Lead-Acid (VRLA) batteries and Lithium-Ion (Li-ion) models.
The latter offers numerous advantages, including a much smaller footprint, longer lifetimes that incur lower service and operating costs, and greater power densities.
It is also important to right-size the UPS system. This may mean spending more in the initial stages by choosing a modular UPS, but the payback comes in long-term dividends through lower energy usage.
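The reasoning behind right-sizing can be sketched numerically. UPS units are typically least efficient at light load, so an oversized monolithic unit wastes energy that a right-sized modular deployment avoids. The efficiency curve and load figures below are hypothetical assumptions, not data from any specific product:

```python
def ups_efficiency(load_fraction):
    """Hypothetical step-wise efficiency curve: worst at light load."""
    if load_fraction < 0.25:
        return 0.88
    if load_fraction < 0.50:
        return 0.93
    return 0.96

def annual_losses_kwh(it_load_kw, ups_capacity_kw):
    """Energy lost in the UPS over a year (8,760 hours) at a steady load."""
    eff = ups_efficiency(it_load_kw / ups_capacity_kw)
    input_kw = it_load_kw / eff
    return (input_kw - it_load_kw) * 8760

# 100 kW IT load: oversized 500 kW monolithic UPS vs 125 kW of modules.
print(round(annual_losses_kwh(100, 500)))  # lightly loaded: higher losses
print(round(annual_losses_kwh(100, 125)))  # right-sized: far lower losses
```

A modular UPS lets capacity grow in steps with the IT load, keeping the unit near the efficient part of its curve throughout the facility's life.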
Underpinning all of the efforts to optimise the efficiency of a data centre, whatever type of hardware assets are deployed, is the Data Centre Infrastructure Management (DCIM) system, or, more recently, next-generation DCIM systems that offer increased visibility, with remote monitoring and management capabilities driven by Artificial Intelligence (AI).
With IoT technology now built into data centre hardware assets, including racks, UPS, IT and cooling systems, as standard, there has never been more insight available into the operating status of a facility.
These new systems also give the user the ability to manage an entire portfolio of data centre assets from a single console, viewing all of the critical systems and components from anywhere, at any time. Newer systems can alert the user to maintenance schedules and leverage Machine Learning to identify performance issues before they become problematic, increasing resilience. Moreover, they can collect performance data for analysis and facilitate continuous operational improvement, so that efficiency is maximised and operating costs are lowered.
According to Andy Lawrence of the Uptime Institute, “The average power usage effectiveness (PUE) ratio for a data centre in 2020 is 1.58, only marginally better than seven years ago.” However, the metric remains an important aspect of IT operations today, as any improvement in PUE, and thereby energy efficiency, will have a positive impact on OpEx.
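PUE is simply total facility energy divided by IT equipment energy, so the OpEx impact of an improvement is easy to estimate. The 1.58 average is from the Uptime Institute figure above; the facility size, target PUE and tariff are hypothetical assumptions:

```python
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def annual_saving_gbp(it_kwh, pue_before, pue_after, price_per_kwh):
    """Cost saved by lowering PUE while the IT load stays constant."""
    return it_kwh * (pue_before - pue_after) * price_per_kwh

# Hypothetical facility: 1 GWh of annual IT load at GBP 0.15/kWh.
print(pue(1_580_000, 1_000_000))                              # 1.58
print(round(annual_saving_gbp(1_000_000, 1.58, 1.40, 0.15)))  # 27000
```

At the industry-average PUE, more than a third of the energy drawn above the IT load itself is overhead, which is why even modest improvements show up directly in the operating budget.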
For many enterprise IT operators, a trade-off has to be made regarding mission criticality, as the more infrastructure an organisation deploys to increase the resilience of its IT systems, the greater the impact on energy usage. DCIM, however, remains an essential component of all data centre operations today.
As with any business, the adage that what gets measured gets managed applies particularly to data centre operations. The drive for greater efficiency is motivated by many factors, including environmental concerns, government regulation and the need to reduce running costs.
As always, a Total Cost of Ownership driven ever lower through greater investment in efficient technologies at the design stage will inevitably have a positive impact on an organisation's bottom line, whilst ensuring its critical IT performs resiliently and exactly as expected. That, of course, is as compelling a business case for efficiency as any.