The edge is a fluid concept

By David Craig, CEO, Iceotope

Traditional IT requirements within edge environments are already creating barriers to efficient and effective long-term deployment.

The edge is fluid, not fixed

Edge complexities for secure, remote, roadside and even in-building locations are already challenging for traditional air cooling of IT equipment. The initial deployment must be effective, or the upgrade costs will set back edge capabilities for years to come.

Building edge computing at scale is emerging as a key challenge faced by almost every industry. Edge computing definitions already exist for the Industrial Edge, Consumer Edge, Network Edge, Autonomous Transport Edge, Critical Power and Transport Infrastructure Edge, Security Edge and many others. Each of these has subsets of applications and deployment needs, and each IT network environment will have its own unique requirements. Even within the different sectors mentioned, individual edge data centre networks will vary significantly. Edge is not fixed by sector, use or design.

ICT within micro edge data centres will in turn be specified to support a vast array of different workloads and varying criticality levels. However, common to all edge data centre computing is the need to house and protect pre-tested IT server, storage and networking equipment in physical environments which can operate autonomously and reliably for long periods.

Power densities are variable at the fluid edge

IoT and edge are constantly expanding and, with them, the volume of data being generated, monitored and transferred between devices, processors and storage. Clearly, edge deployments will not be homogeneous. There will be installation differences across industries and even within the same organisation. These differences include, but will not be restricted to, differing power density needs (as dictated by the compute density) per module in a given edge roll-out. More electrical power for data processing inevitably means more heat is generated and must be removed, and consequently greater cooling capacity is required.

Growing evidence indicates that compute, whether at high or low power densities in robust and sealed micro data centre environments, can best be delivered via sealed chassis-level liquid cooling technology. Uniquely, chassis-level liquid cooling provides reliable operation of ICT infrastructure, including component protection, guarantees technical performance and enables significant reductions in the number of required service interventions.

High data centre power densities are required for high-performance workloads and data-intensive throughput in sectors such as processing, manufacturing and bio-sciences, as well as communications networks. In fact, with advances in the use of AI software applications, it is hard to imagine a single sector where high-performance edge computing will not be required. At the same time, the intensifying lobby for sustainability in data centres of all formats will compel industries to invest in low-power-density but optimised compute within edge data centres.

Fluid edge – liquid protection

Whether at high or low power densities, protecting ICT equipment in new and harsh environments demands constant vigilance.
Air cooling will not protect IT equipment at the edge. In these non-clean-room locations, current air cooling techniques do not segregate delicate electronic circuits and components from gaseous and particulate contaminants or humidity. To meet the economic viability constraints of edge scale-up, the technology in street-level containers must have a guaranteed solution to contamination, or it may not meet component manufacturers' warranty conditions.

At the fluid edge, air cooling can only offer a limited short-term solution, and it brings with it a host of complexities and problems, such as the need for regular intrusive maintenance of fans and filters, which requires equipment shutdown. As every IT and M&E engineer knows, equipment fails most often when it is being switched off or on. A large majority of downtime occurs through human error, usually associated with maintenance. Even the simple act of servicing components like filters in an air-cooled edge environment introduces a greater risk to the continuity of IT services. A recent ASHRAE TC9.9 report, 'Edge Computing: Considerations for Reliable Operations', considered servicing and highlighted the additional contamination risk resulting from maintenance access.

Today, deployed edge compute infrastructure units require an average of 2.5kW of power. The data processing, storage and networking workloads expected to be handled at the fluid edge will drive up average power densities. These are expected to rise rapidly to 5kW and then probably continue higher still. In some HPC edge environments, designs have already expanded to 7.5kW-10kW.
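As a rough illustration of what removing heat at these power levels involves, the minimal sketch below estimates the water flow needed to carry away a given IT load. The 10K coolant temperature rise, the water property values and the loop itself are illustrative assumptions, not figures from any particular edge or Iceotope design; only the 2.5kW, 5kW and 10kW loads come from the figures above.

```python
# Illustrative only: estimate the water flow needed to remove a given IT heat
# load, assuming a water-based coolant loop with a 10 K temperature rise.
SPECIFIC_HEAT_WATER = 4179   # J/(kg*K)
DENSITY_WATER = 998          # kg/m^3 at roughly 20 degC

def required_flow_lpm(heat_load_w: float, delta_t_k: float = 10.0) -> float:
    """Volumetric flow (litres per minute) needed to carry away heat_load_w."""
    mass_flow_kg_s = heat_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)
    volume_flow_m3_s = mass_flow_kg_s / DENSITY_WATER
    return volume_flow_m3_s * 1000 * 60

for load_kw in (2.5, 5.0, 10.0):  # module power levels cited above
    print(f"{load_kw} kW -> ~{required_flow_lpm(load_kw * 1000):.1f} L/min of water")
```

Even at 10kW, the implied flow is only around 14 litres per minute under these assumptions, which is part of why compact, sealed liquid loops are practical inside small edge enclosures.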
At the fluid edge, IT and infrastructure will need to scale. As more data is processed at the edge, more powerful chips will be installed, requiring more power and generating more heat. Only chassis-based liquid cooling can scale to protect vital ICT equipment while minimising environmental impact. Liquid cooling provides highly effective edge data centre cooling in every location, from factories to farms, and in all environmental conditions, whether extreme cold, extreme heat or fluctuating climates.

The effectiveness of chassis-level liquid cooling solutions is increased because they precisely target ICT hot spots. The protection comes from directing the cooling through precision pipework within sealed units inside the edge data centre. The ICT is not immersed in liquid but protected and cooled at the server chassis level. At the fluid edge, ICT equipment is neither exposed to contaminant-bearing cooling air streams nor housed in infrastructure that must be vented for heat removal, which could provide a vector for accidental or malicious physical intrusion. It is also silent in operation, with no fan noise to cause nuisance in residential locations.

Fluid future

Power requirements, and by extension the cooling technology choices, made today for edge compute data centre infrastructure will determine the success of continuous operations and infrastructure protection. Achieving the long-term benefits and economies of scale of edge requires fresh thinking about the base-level infrastructure which will house, power and cool the micro data centres on which global companies and their customers will depend. The fluid edge is not fixed.

Chassis-level liquid cooling is the most effective technology solution to meet edge compute scale, ICT protection, security, power and density needs. At the fluid edge, there will be as many applications as there are industrial and service processes in existence. The fluid future is already here: the most efficient edge ICT data centre protection and performance is probably chassis-based liquid cooling.

What is the future of liquid cooling?

Nigel Gore, Global Offerings, High Density and Liquid Cooling, Vertiv

Data centres continued to flourish in 2020, due to rapid growth in hyper-scale and colocation facilities. New applications in artificial intelligence (AI), machine learning, deep learning, connected devices and the Internet of Things (IoT) also increased demand for computing power. As data centres grow, so do their carbon footprints. The EU Commission recently noted that data centres and telecoms are responsible for a significant environmental footprint, and "can and should become climate neutral by 2030". To work toward meeting this goal, the industry needs to implement innovative cooling technologies.

Liquid cooling systems can remove heat up to 1,200 times more effectively than air. This is because liquid can provide cooling directly to the heat source, rather than indirectly via fans or convection systems. Liquid cooling also makes it easier to transport heat, which opens up the possibility of reusing this energy elsewhere. For example, in Switzerland, the heat removed by liquid cooling has been used to warm local swimming pools.

Innovation in liquid cooling is advancing in line with the global drive for energy efficiency. For example, Google is using sea-water cooling for its new data centre in Finland, and PlusServer is building a data centre in Strasbourg that uses groundwater at a fixed 10°C to 14°C as feed-liquid for cooling air in the data centre. Microsoft has also recently declared its experiment in submerged data centres a success, after trialling the system off the Scottish coast. The pre-built unit contained 864 servers and 27.6 petabytes of storage, the first of potentially many sub-sea containers.

Liquid cooling heats up

According to Forbes, US data centres now use more than 90 billion kilowatt-hours of electricity a year, requiring roughly 34 giant (500 megawatt) coal-powered plants. Global data centres used roughly 416 terawatt-hours of electricity, or about 3% of the planet's electricity requirement, last year. That is almost 40% more than the entire UK energy requirement. We expect data centre energy consumption to double every four years.

This is because the new digital economy is pushing for an increase in computing power, which means that data centre applications now generate much more heat. Whereas traditional rack densities were less than 10kW, artificial intelligence and high-performance computing applications require new server and GPU hardware infrastructure. Vendors such as NVIDIA, AMD and Intel are also developing more powerful, heat-generating chips, with a cumulative effect of higher rack densities. This necessitates cooling solutions capable of handling in excess of 60kW per rack. Data centres will struggle to remove the extra heat generated by these new demands using traditional air cooling systems. Air-cooled racks, using Computer Room Air Conditioning (CRAC) alone, are no longer up to the job for high-density workloads.

The thermal and physical properties of air limit its ability to capture heat. Liquids, or combinations of CRAC and liquid cooling, are necessary to cool data centres with next-generation high-density chip architectures and applications. Liquids are orders of magnitude better at capturing heat. Water, for example, with a specific heat capacity of 4,179 J/kgK, is far more efficient than air by volume, and liquid can transfer heat away from the most sensitive and critical components of a CPU and GPU. This is likely why Omdia's Data Centre Thermal Management Report 2020 found that the use of liquid cooling methods, such as immersion, will double between 2020 and 2024.
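To put the "more efficient by volume" point in perspective, here is a minimal sketch comparing the volumetric heat capacity of water and air. The water figure is the one quoted above; the air values (roughly 1,005 J/kgK and 1.2 kg/m³) and the water density are standard room-temperature approximations, not numbers from the report.

```python
# Rough comparison of how much heat a litre of water can absorb per kelvin
# versus a litre of air, using typical room-temperature property values.
water_cp = 4179   # J/(kg*K), as quoted above
water_rho = 998   # kg/m^3
air_cp = 1005     # J/(kg*K), typical for dry air
air_rho = 1.2     # kg/m^3 at sea level, ~20 degC

water_per_litre_k = water_cp * water_rho / 1000   # J per litre per kelvin (~4,170)
air_per_litre_k = air_cp * air_rho / 1000         # J per litre per kelvin (~1.2)

print(f"Water absorbs ~{water_per_litre_k / air_per_litre_k:,.0f}x more heat "
      "per litre per kelvin than air")
```

On raw heat capacity alone the ratio runs into the thousands; practical system-level figures, such as the "up to 1,200 times" quoted earlier, also depend on flow rates, temperature differences and how directly the coolant reaches the heat source.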
Cold plate and immersion cooling

Liquid cooling can help data centre managers deliver more compute power while reducing the facility's overall power consumption and improving its power usage effectiveness (PUE). Cold plates are increasing in popularity for liquid cooling. They are mounted directly to the surface of high heat-generating equipment inside the server, providing a closed loop of cooling fluid pumped directly to the chip. Multiple cold plate designs exist, from CPU-only cold plates to designs that capture heat from CPUs, GPUs and memory components within the IT devices. An alternative method is immersion cooling, which requires a server to be completely submerged in a dielectric fluid, eliminating the need for air cooling entirely.

Liquid cooling has undoubted benefits, but both cold plate and immersion techniques present challenges for data centres. Both methods are a radical departure from traditional thermal management approaches, and many data centres have legacy infrastructure that can't easily switch from air to liquid cooling systems.

Moving to liquid cooling

Converting to liquid cooling requires a full-scale exercise to map out the equipment with the highest heat output. This includes examining the networking, telecommunications, power delivery and data storage infrastructure in detail, alongside payback analysis and trade-off comparisons. Other costs to factor in are labour overheads and the level of conversion required to switch facilities to a liquid-cooled environment. As sustainability becomes increasingly important, the choice is between rebuilding a data centre from scratch or retrofitting a liquid cooling option. To enable retrofitting, an increasingly popular method is to deploy liquid cooling in an air-cooled data centre, focusing on specific high-density racks. Each project, however, comes with its own specific requirements, and consulting with experts at the outset prevents project overruns and unexpected costs.

A sustainable future

We need to be more innovative in the way we use and adopt liquid cooling solutions, especially as data centres come under renewed scrutiny over their energy needs. A recent initiative involved a 5G infrastructure business pumping the water used for liquid cooling into a housing complex, heating the domestic water supply and radiators at the same time as cooling its servers. We will also need collaboration between industry and academia to provide competitive and cost-effective solutions. For example, Vertiv works with Center for Energy-Smart Electronic Systems (ES2) partner universities on furthering the efficiencies of data centres.
This type of partnership can assist in developing compatible materials, furthering our understanding of fluid hygiene, filtration and containment methods, and optimising controls for liquid cooling systems.

The need for data will continue to grow; we can't put that genie back in the box. It's how we manage the consumer demand for data, and the energy it requires, that will shape the future of the data centre industry. The aspiration is that liquid cooling innovations will help reduce our carbon footprint and create a more sustainable future.

Checking-in on remote assets

Philippe Aretz, Channel Sales Director, Ovarro

With broadcast's geographically spread assets and multiple processes that all generate massive amounts of data, it is key to be able to capture and interpret that data in real-time. Philippe Aretz, Channel Sales Director at Ovarro, explains why the deployment of remote telemetry units (RTUs) and SCADA systems has been essential to support the roll-out of mobile communications and broadcasting mediums, especially those in remote locations.

One of the most important demands for telemetry systems in the broadcast sector is to establish a reliable communication network. Safety and efficiency are key priorities too, while organisations have legitimate concerns about security breaches. The benefit of using RTUs in the broadcast and telecoms sector is that they help to reduce downtime and enable more efficient preventative maintenance scheduling. This is achievable due to the RTU system's ability to provide accurate, real-time data, allowing asset management teams to make better, more informed decisions. In addition, because RTUs do everything locally, if communications break down they continue to run and act autonomously, maintaining a historical log and reporting back later. In the remote, harsh locations where many masts are sited, communication is liable to fail regularly, but RTUs can manage this. Data acquired by the RTU can be used, for instance, to support maintenance decisions and to verify that performance obligations are being adhered to.

RTUs can be deployed on a wide range of equipment, from masts, control panels and security measures through to electrical gear. Once in place, they provide remote monitoring of electrical current, temperature, smoke, power and humidity. The unit relays this information to the cloud, where it can be analysed and trended, providing operators with a high degree of predictive maintenance capability. Where power supply from the grid is not available, they can incorporate intelligent management of power consumption as well as battery or solar power options.

The real value of an RTU is that it can perform autonomous control in real-time and then report to SCADA on its current status. Operators at the SCADA interface can 'supervise' the operations by setting new KPIs (setpoints) or updating instructions (open/close this, start/stop that, for example) for RTUs to then act upon and manage locally. These developments in RTU functionality make them particularly suited to the broadcast sector because they offer resilience to the site environment and can operate with minimal drain on local power resources, while retaining the processing power to perform local control algorithms autonomously.
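To make that store-and-forward behaviour concrete, here is a minimal sketch of the control-first, report-later pattern. The class and method names, the single temperature setpoint and the simulated link state are invented for illustration; this is not the interface of any Ovarro product or SCADA protocol.

```python
import time
from collections import deque

class RemoteTelemetryUnit:
    """Minimal sketch of an RTU: local control plus store-and-forward reporting."""

    def __init__(self, setpoints):
        self.setpoints = setpoints            # e.g. {"enclosure_temp_c": 35.0}
        self.backlog = deque(maxlen=10_000)   # local historical log

    def read_sensors(self):
        # Placeholder for real I/O: temperature, current, humidity, smoke, power...
        return {"enclosure_temp_c": 36.2, "supply_current_a": 8.4}

    def control(self, reading):
        # Autonomous local action against setpoints, independent of SCADA.
        if reading["enclosure_temp_c"] > self.setpoints["enclosure_temp_c"]:
            self.actuate("cooling_fan", "start")

    def actuate(self, device, command):
        print(f"local control: {command} {device}")

    def report(self, reading):
        print(f"report to SCADA: {reading}")

    def run_once(self, scada_link_up: bool):
        reading = {"ts": time.time(), **self.read_sensors()}
        self.control(reading)          # control always runs, even offline
        self.backlog.append(reading)   # every reading is logged locally
        while scada_link_up and self.backlog:
            self.report(self.backlog.popleft())   # flush the backlog once the link returns

rtu = RemoteTelemetryUnit({"enclosure_temp_c": 35.0})
rtu.run_once(scada_link_up=False)  # comms down: keeps controlling, buffers data
rtu.run_once(scada_link_up=True)   # comms restored: backlog reported to SCADA
```

A production RTU would add secure communications, watchdogs and battery or solar power management, but the essential point is the same: control happens locally in real-time, and reporting catches up whenever the link allows.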
'Smart' telecoms assets

Companies in the broadcast sector are demanding more, and smarter, information and, as a result, RTUs will be deployed in greater numbers. It's worth bearing in mind that most RTUs are currently only used for operations, although they can also support maintenance teams, health and safety initiatives and environmental management. As networks grow through more connected devices, managing the greater number of assets will become increasingly difficult through traditional, non-technology methods such as scheduled service engineer visits. Mergers between companies that span continents, and the sharing of assets such as masts, will also demand a technological approach, because central management will increasingly require remote data acquisition. Set against this is the fact that RTUs are a cost-effective way of collecting large amounts of data; the relatively low investment cost for the benefits they deliver will make them a more attractive proposition on a wider range of assets.

With broadcast's geographically spread assets and multiple processes that all generate massive amounts of data, the key to ensuring these improvements is being able to capture and interpret that data in real-time. The latest ruggedised RTU technology focuses specifically on that, helping operators meet their investor and customer commitments.

Re-imagine optical networks with a smarter approach

Jürgen Hatheier, Chief Technology Officer for Europe, Middle East and Africa, Ciena

The past year has emphasised just how reliant we are on connectivity, from the way we live and work to our social lives. There is no selectivity when it comes to the crush of bandwidth demands, and network providers are all experiencing unprecedented growth that shows no signs of slowing down. However, there is not only a need to support growing capacity; this is coupled with the need to deliver a consistent, quality end-user experience as on-demand content and IoT devices push traffic levels even higher.