The Evolution

Telecommunications service providers saw the opportunity to virtualise their network functions as software matured and more systems became virtualised. Detaching software from hardware allows for the use of low-cost commodity hardware and the optimisation of the overall infrastructure. Making network functions virtual spawned a whole new field of knowledge, as did the infrastructure that supports them: Network Function Virtualisation Infrastructure.

Virtual networks are an evergreen concept that is constantly being rediscovered and reinvented. A virtual network system, in essence, allows IT to overlay numerous logical networks on a shared physical network. IT departments may use virtual networks to isolate subsets of endpoints for security reasons, or to meet the requirements of specific protocols or applications.

SDN was initially envisaged as an open-source solution for increasing enterprise control over the network, both in data centres and on the LAN. The idea was to free network architectures from the clutches of network vendors by making them independent of any single vendor's architecture. The open and cross-platform methods made enough progress to put pressure on vendors to adopt the basic control-plane data model. However, enterprises first used SDN in the WAN, rather than in data centres. Since 2015, software-defined WAN has blended SDN concepts into enterprise WAN strategy.

While network virtualisation enables organisations to segment different virtual networks within a single physical network, or to connect devices on different physical networks to form a single virtual network, software-defined networking enables a new method of controlling data packet routing through a centralised controller.

The Future

Although cloud-hosted control planes are being implemented in production networks, the industry is only now seeing SDN applied to access networks and programmable pipelines used to deliver new data plane features. Enterprises have implemented network virtualisation and SD-WAN to varying degrees, although traditional networks still outnumber software-defined networks. Expect more adoption as the technology improves and the APIs stabilise, but new use cases may have the most impact on the role SDN finally plays. A large part of the promise of SDN is the potential to offer features that were previously unattainable in traditional networks. However, SDN raises numerous questions about the future of modern networking and computing technologies, particularly since much of it is open-source technology. While organisations require more responsive technology, they will not sacrifice security and control. SD-WAN's appeal is that it cuts expenses, enhances user experience, and boosts connectivity to and from the cloud.

SDN is a network architecture approach that allows networks to be intelligently and centrally controlled using software applications. This enables operators to manage the entire network consistently and holistically, regardless of the underlying network technology.

The liquid future of data centre cooling

With rising demand and equipment densities, air as a cooling medium is reaching its limits. Developments in hybrid and liquid cooling will allow providers to rise to the challenge sustainably.

Markus Gerber, senior business development manager, nVent Schroff

The data infrastructure industry is facing a number of challenges in today's digital world.
Demand for data services is growing at a phenomenal rate, and yet there has never been greater pressure, or duty, to deliver those services as efficiently and cleanly as possible. As every area of operation comes under greater scrutiny, one area in particular - cooling - has come into sharp focus. It is an area not only ripe for innovation, but one where significant progress has been made that shows a way forward to a greener future.

According to some estimates, the number of internet users worldwide has more than doubled since 2010, while internet traffic has increased some 20-fold. Furthermore, as technologies emerge that are predicted to be the foundation of future digital economies - streaming, cloud gaming, blockchain, machine learning and virtual reality - demand for digital services will rise not only in volume, but also in sophistication and distribution. Increasingly, the deployment of edge computing, bringing compute power closer to where it is required and where data is generated, will drive demand for smaller, quieter, remotely managed infrastructure. This one area alone is expected to grow at a compound annual growth rate (CAGR) of 16% to 2026, to a market of more than $11 billion, according to GlobalData, a research, consulting and events business.

This level of development brings significant challenges for energy consumption, efficiency and architecture. The IEA already estimates that data centres and data transmission networks are responsible for nearly 1% of energy-related greenhouse gas (GHG) emissions. While it acknowledges that emissions have grown only modestly since 2010 despite rapidly growing demand - thanks to energy efficiency improvements, renewable energy purchases by information and communications technology (ICT) companies, and broader decarbonisation of electricity grids - it also warns that to align with the Net Zero by 2050 Scenario, emissions must halve by 2030.

This is a significant technical challenge. Firstly, throughout the last several decades of ICT advancement, Moore's Law has been an ever-present effect: compute power more or less doubles, with costs halving, every two years or so. As transistor densities become more difficult to increase at the single-nanometre scale, no less a figure than the CEO of Nvidia has asserted that Moore's Law is effectively dead. This means that in the short term, meeting demand will require deploying more equipment and infrastructure, at greater density. Added to this are recent developments from both Intel and AMD, whose high-end data centre processors will operate in the 350-400W range, further exacerbating energy demand.

All these changes will impact cooling infrastructure and cost

In this scenario of increasing demand, higher densities, larger deployments and greater individual energy demand, cooling capacity must be ramped up too. Air as a cooling medium was already reaching its limits: it is difficult to manage, imprecise and somewhat chaotic. As rack systems become more demanding, often mixing both CPU- and GPU-based equipment, individual rack demands are approaching or exceeding 30kW each. Air-based systems at large scale also tend to demand a very high level of water consumption, for which the industry has received criticism in the current environment.
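As a rough illustration of how quickly those rack loads escalate, consider the back-of-the-envelope sketch below; every figure in it is an assumption for illustration, not a vendor specification.

```python
# Back-of-the-envelope only: all figures are illustrative assumptions.
cpu_servers = 16          # dual-socket 1U servers per rack (assumed)
cpu_server_watts = 800    # two high-end ~400W CPUs per server (assumed)
gpu_servers = 4           # accelerator nodes per rack (assumed)
gpu_server_watts = 3000   # a dense multi-GPU node (assumed)

rack_watts = cpu_servers * cpu_server_watts + gpu_servers * gpu_server_watts
print(f"Rack IT load: {rack_watts / 1000:.1f} kW")  # -> Rack IT load: 24.8 kW

# Every watt of IT load becomes heat the cooling system must remove,
# so a mixed CPU/GPU rack like this already sits near the ~30kW mark
# at which air cooling becomes impractical.
```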
On water specifically, one estimate equated the usage of a mid-sized data centre to that of three average-sized hospitals.

Liquid cooling technologies have developed as a means to meet the demands of both volume and density needed for tomorrow's data services. Studies with different liquid cooling techniques have established that they can be anywhere from 50 to 1,000 times more efficient than air cooling. Liquid cooling takes many forms, but the three primary techniques currently are direct-to-chip, rear door heat exchangers, and immersion cooling.

Direct-to-chip (DtC), or direct-to-plate, cooling places a metal plate on the chip or component and circulates liquid within enclosed chambers, carrying heat away. This is a highly effective technique that is precise and easily controlled. It is often used in specialist applications, such as high-performance computing (HPC) environments.

Rear door heat exchangers, as the name suggests, are close-coupled indirect systems that circulate liquid through embedded coils to remove server heat before exhausting it into the room. They have the advantage of keeping the entire room at the inlet air temperature, making hot and cold aisle cabinet configurations and air containment designs redundant, as the exhaust air cools to inlet temperature and can recirculate back to the servers. The most efficient units are passive in nature, meaning server fans move only the air necessary. They are currently regarded as limited to 20kW to 32kW of heat removal, though units incorporating supplemental fans can handle higher loads, up to around 60kW.

Immersion technology employs a dielectric fluid that submerges equipment and carries away heat through direct contact. While for many, liquid immersion cooling immediately conjures up the image of a bath brimful of servers and dielectric, precision liquid immersion cooling operates at rack chassis level, with servers and fluid in a sealed container. This enables operators to immerse standard servers with certain minor modifications, such as fan removal, as well as sealed spinning disk drives. Solid-state equipment generally does not require modification. A distinct advantage of the precision liquid cooling approach is that full immersion provides liquid thermal density, absorbing heat for several minutes after a power failure without the need for back-up pumps. Liquid capacity equivalent to 42U of rack space can remove up to 100kW of heat in most climate ranges, using an outdoor heat exchanger or condenser water, allowing the employment of free cooling.

Cundall's liquid cooling findings

According to a study by engineering consultants Cundall, liquid-cooling technology consistently outperforms conventional air cooling in terms of both power usage effectiveness (PUE) and water usage effectiveness (WUE). This, says the report, is principally due to the much higher operating temperature of the facility water system (FWS) compared to the cooling mediums used for air-cooled solutions. In all air-cooled cases, considerable energy and water are consumed to arrive at a supply air condition that falls within the required thermal envelope. The need for this is avoided with liquid cooling, it states.
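For reference, the two headline metrics Cundall uses are simple ratios; a minimal sketch follows, in which all the annual figures are made-up examples, not data from the report.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal (all energy reaches the IT load)."""
    return total_facility_kwh / it_kwh

def wue(site_water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness, in litres per kWh of IT energy."""
    return site_water_litres / it_kwh

# Hypothetical annual figures for a 1MW IT load (illustrative only):
it_kwh = 8_760_000        # 1MW running for all 8,760 hours of the year
overhead_kwh = 3_504_000  # cooling, power distribution, lighting, etc.
water_l = 15_000_000      # evaporative cooling make-up water

print(f"PUE: {pue(it_kwh + overhead_kwh, it_kwh):.2f}")  # PUE: 1.40
print(f"WUE: {wue(water_l, it_kwh):.2f} L/kWh")          # WUE: 1.71
```

Liquid cooling improves both ratios at once: less overhead energy is spent conditioning air, and less water is evaporated to reach the required supply temperature.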
Even in tropical climates, the operating temperature of the FWS is high enough for the hybrid coolers to operate in economiser free cooling mode for much of the time, and under peak ambient conditions, sufficient capacity can be maintained by reverting to 'wet' evaporative cooling mode. A further consistent benefit, the report adds, is the reduction in rack count and data hall area that can be achieved through higher rack power density.

Consistent benefits were found in terms of energy efficiency and consumption, water usage and space reduction across multiple liquid cooling scenarios, from hybrid to full immersion, as well as OpEx and CapEx benefits. In hyperscale, co-location and edge computing scenarios, Cundall found the total cost of cooling information technology equipment (ITE), per kW consumed, was 13-21% less for liquid cooling than for the base case of current air cooling technology. In terms of emissions, Cundall states that PUE and total power usage effectiveness (TUE) are lower for the liquid-cooling options in all tested scenarios. Expressed as the reduction in kg CO2 per kW of ITE power per year, results showed savings of more than 6% for co-location, rising to almost 40% for edge computing scenarios.

What does the immediate future hold in terms of liquid cooling?

Combinations of liquid and air cooling techniques, in hybrid implementations, will be vital in providing a transition, especially for legacy facilities, to the kind of efficient, emission-conscious cooling needed by current and future facilities. Though immersion techniques offer the greatest effect, hybrid cooling offers an improvement over air alone, with OpEx, performance and management advantages.

Even as the data infrastructure industry institutes initiatives to better understand, manage and report sustainability efforts - such as the Climate Neutral Data Centre Pact, the Open Compute Project, and the 24/7 Carbon-free Energy Compact - more can, and must, be done to make every aspect of implementation and operation sustainable. Developments in liquid-cooling technologies are a significant step forward that will enable operators and service providers to meet demand while ensuring that sustainability obligations and goals can be met. Initially, hybrid solutions will help legacy operators make the transition to more efficient and effective systems, while more advanced technologies will ensure new facilities are more efficient, even as capacity is built out to meet rising demand. By working collaboratively with the broad spectrum of vendors and service providers, cooling technology providers can ensure that requirements are met, enabling the digital economy to develop to the benefit of all.

Tackling TSA compliance through privileged access

Mark Warren, Product Specialist, Osirium

Telecommunications companies are increasingly being targeted by threat actors carrying out complex and sophisticated cyberattacks. Recent events in Australia showcase the extent of the problem - between September and December 2022, three of the country's leading telcos suffered a series of major breaches that made international headlines. Optus was the first of these, having been subjected to a major data breach that saw as many as 10 million of its customers' accounts exposed.
Two weeks later, in October 2022, Telstra was hit by a smaller data breach, the company revealing that some information dating back to 2017 had been exposed after one of its third-party suppliers was hacked. Come December 2022, Telstra had suffered another incident, in which an internal IT error caused a data leak that affected approximately 130,000 of its customers. And in the very same month, TPG Telecom Ltd became the latest Australian telco to fall victim, announcing that the emails of up to 15,000 of its corporate customers had been accessed.

Of course, such incidents aren't unique to Australia. In fact, recent reports suggest that six breaches occurring between 5 January and 1 February 2023 resulted in the data of more than 74 million customers being leaked, with AT&T, T-Mobile, US Cellular and Verizon all targeted. Unfortunately, the telecommunications sector globally has long been a high-value target for threat actors. Research shows that the average communications organisation suffered as many as 1,380 attacks every week in 2022 - an increase of 27% compared with 2021. This makes telecommunications the fourth most targeted sector, behind education/research, government/military, and healthcare.

The theme here is concerning: the sectors responsible for critical national infrastructure are coming under fire more than any others. Indeed, telecoms providers are typically responsible for the control and operation of critical national infrastructure (CNI) that countries and their inhabitants are heavily reliant upon, such as energy, information technology and transportation systems. Should they be attacked successfully, the effects can be both severe and extensive.

The threat against networks has only become more severe since the Russian invasion of Ukraine. A report from the UK's National Cyber Security Centre (NCSC) states that Russia was behind an operation targeting commercial communications company Viasat in Ukraine on 24 February last year, an attack which began approximately one hour before Russia launched its land invasion. Further, the idea that the telecommunications sector has been a key battleground during the war is affirmed by a report published by the United Nations' International Telecommunication Union (ITU), which states that Russia caused at least $1.79 billion worth of damage to Ukraine's telecoms infrastructure.

What are the key cyber threats facing the telecommunications sector?

While Russia's invasion has to an extent relied upon physical attacks on telecommunications networks, telcos across the world are faced with combatting a variety of sophisticated and growing attacks of the digital variety. There is a breadth of risks that sector players are having to manage and mitigate at present, including insider threats - the incident impacting Telstra in December, in which 130,000 customer records were accidentally leaked, being a prime example. Critically, these can either be unintentional, in which users are unaware of the potential risks associated with their actions, or maliciously carried out by someone within an organisation. Equally, supply chain attacks are becoming an increasing threat, as was proven by the Telstra breach in October 2022, where company data was exposed after an external supplier was hacked.
Indeed, the telecom sector is highly connected, typically dealing with external web hosting providers, data management service providers and a variety of other external partners. This can become an issue if any one of these vendors has a weak security posture - it takes just one weak link in the supply chain to cause severe damage.

That said, there are many other potential vulnerabilities that can be exploited. Where uninterrupted service is paramount, as in the telecommunications sector, DDoS attacks are often used in an effort to disrupt and shut down provider operations, impacting millions of customers and leading to financial losses. DNS attacks are also common - research shows that in 2019, prior to the pandemic, 83% of telecom firms had experienced a DNS attack.

Critically, many of the world's most malicious cybercriminal outfits have made telecommunications their primary market of attack. Threat group LAPSUS$ - renowned for carrying out data breaches and then demanding ransom payments - repeatedly attacked T-Mobile until as recently as March 2022, for example. Further, LightBasin - a hacker group that has been active since 2016 - has previously attacked 13 global telecom companies, looking to gain access to subscriber information and call metadata in each instance.

Effective privileged access management is a vital part of TSA compliance

In an effort to enhance the security and resilience of industry players and infrastructure, and to prevent telecommunications firms from falling victim to the frequent and varied attack methods used by threat actors, the UK government introduced the Telecommunications (Security) Act 2021 (TSA), which came into force on 1 October 2022. The original proposal reads: "The increased reliance of our economy, society and critical national infrastructure (CNI) on telecoms infrastructure means we need to have confidence in its security. Without that confidence, the disruptive impact of successful cyber attacks by threat actors will continue to grow and the consequences of connectivity compromises or outages could be catastrophic."

Enforced by the Office of Communications (Ofcom), the TSA enables the government to implement key regulations and best practice recommendations. If UK telecommunications firms fail to comply with these regulations, they can face penalties amounting to as much as 10% of turnover, or £100,000 per day. Having been developed with the guidance of the National Cyber Security Centre (NCSC), the regulations state that telcos must be capable of identifying any potential risks of security compromises and take measures to reduce those risks, while also consistently reviewing existing processes to prepare for evolving threats.

Within its more detailed list of requirements, the TSA highlights the management of privileged access to services and devices that are components of CNI as being of critical importance to compliance. We've all encountered privileged access, typically in the form of powerful administrator accounts that manage critical systems, services, applications and devices. These accounts serve a key operational purpose, providing users with the rights and privileges needed to access the specific resources vital to completing work-related tasks. The security concern, however, stems from the fact that these administrator accounts are able to make significant changes to systems.
Not only can they control the ability of staff, external partners and even customers to complete their work effectively, but they can also access and alter sensitive data, such as valuable intellectual property or personally identifiable information (PII). This makes administrator accounts an incredibly enticing target for threat actors. They are the keys to the IT kingdom of any organisation, and in the hands of a nefarious actor they can be used to change permissions, create backdoor accounts, or alter and delete business-critical data. If a threat actor gained access to a telecommunications provider's most sensitive management systems, they could deny access to legitimate users of those systems or limit the services provided to end users, causing serious disruption to our lives, and worse.

Specifically, the NCSC has highlighted four potential negative consequences of attacks on telecommunications networks:

1. Disruption of networks: impacting the operation of services or equipment within the UK's telecoms networks.
2. Network espionage: the malicious acquisition, modification or use of data within the UK's telecoms networks.
3. Network pre-positioning: attackers gaining administrative access or presence within the UK's telecoms networks to enable future exploitation.
4. National-scale supplier dependence: dependence on an external service for the effective operation of the UK's telecoms services.

Three solutions to improve privileged access management for telcos

Fortunately, this is by no means a lost cause. There are several solutions that telecommunications providers can leverage to mitigate the use and abuse of privileged accounts. Here, we outline three that organisations looking to address this key vulnerability should consider.

1. Privileged Access Management (PAM)

First, Privileged Access Management (PAM) can play a key role as a defence mechanism for critical back-end systems and databases. It goes beyond Identity Access Management (IAM), which focuses on proving the identity of the user, by adding policies that determine which systems each user can access, and with what privilege level. With PAM, the aim is to ensure that any individual accessing a system has the lowest level of privilege needed to still do their job effectively. In this sense, it is a vital component of successful zero-trust models, providing an effective means of upholding the principle of least privilege.

2. Privileged Process Automation (PPA)

Of course, manually creating users and managing privileges can be a time-consuming process in organisations with high headcounts, as is typically the case with telecommunications firms. If access control teams are left to manage these extensive workloads without technological support, mistakes can be made, resulting in either too much access being provisioned to the wrong group, or not enough access being provisioned for employees to do their work effectively. To reduce this burden and cut down on errors, Privileged Process Automation (PPA) can be used - a secure and flexible framework for automating the management of access rights. PPA can be connected to central HR systems, for example, so that when a new starter is added, the necessary user accounts and appropriate access rights are provisioned automatically, as the sketch below illustrates.
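The following is a generic, minimal sketch of that joiner-provisioning pattern, combining PPA-style automation with the least-privilege principle behind PAM; it is not Osirium's product or API, and the role names, systems and privilege levels are hypothetical.

```python
# Generic sketch of the PPA idea -- not Osirium's product or API.
# Role names, systems and privilege levels below are hypothetical.
ROLE_ENTITLEMENTS = {
    "noc_engineer":  {"ticketing": "user", "network_mgmt": "operator"},
    "billing_admin": {"ticketing": "user", "billing_db": "admin"},
}

def provision_new_starter(hr_record: dict) -> list[str]:
    """Provision a joiner's accounts, granting only what the role needs."""
    role = hr_record["role"]
    if role not in ROLE_ENTITLEMENTS:
        raise ValueError(f"No entitlement profile defined for role: {role}")
    audit_trail = []
    for system, privilege in ROLE_ENTITLEMENTS[role].items():
        # A real deployment would call each target system's API here;
        # this sketch only records the grant for the audit trail.
        audit_trail.append(f"{hr_record['user_id']}: {system} = {privilege}")
    return audit_trail

# Triggered automatically when the HR system adds a new starter:
for grant in provision_new_starter({"user_id": "jdoe", "role": "noc_engineer"}):
    print(grant)
```

Because the entitlement map is defined once and applied mechanically, every joiner receives exactly the rights their role requires - no more, no fewer - and every grant is recorded.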
3. Privileged Endpoint Management (PEM)

While reducing the number of administrator accounts is incredibly important in limiting a firm's exposure to threats, certain user groups will still require privileged access to undertake critical work-related tasks. In organisations where administrator rights have been removed from all endpoints, IT teams can face an overwhelming number of requests to make configuration changes, such as installing software that a user requires to complete their work. Here, Privileged Endpoint Management (PEM) can be leveraged to remove administrator rights from users while still escalating privileges for specific processes where necessary. Policies enable IT to fine-tune exactly which applications to allow or deny privileged execution for specific AD users and groups, ensuring only verified applications are used and a full audit trail of escalations is maintained.

Reap the rewards of proactive compliance

By leveraging the right combination of expertise and technologies, enterprises can mitigate the threats stemming from privileged access accounts without placing unmanageable burdens on access control teams. Not only can organisations ensure all users are provided with just the right level of access and permissions needed to complete work-related tasks (PAM), but they can also reduce burdens on access control teams while eliminating errors, by automating repetitive tasks such as updating permissions for company leavers and new starters (PPA).

Under the TSA, telecommunications firms need to ensure that the critical functions required to operate networks and services effectively, with data properly secured, are in place before March 2024. By embracing such solutions, they will be taking a significant step towards swiftly and effectively getting ahead of this compliance deadline. It is also likely that the TSA will continue to evolve as the UK government seeks to ensure that telcos adapt their security best practices in response to changing threats. Some experts feel that future iterations may see requirements for programmatic updates to network infrastructure, for example. By working alongside external parties capable of provisioning vital security technologies, such as those focused on privileged access management, telcos will secure peace of mind that they are well placed to navigate any future changes.

The rise of cybersecurity threats and what that means for the Channel

Jon Selway, VP of Channel Sales EMEA, Aryaka

The rise of cybersecurity and managed services, alongside the emergence of the SME sector as a security market hotbed, has, as some might claim, played into the Channel's hands. But the threats faced by SMEs are still a wild jungle of dangers that many Channel businesses are trying to navigate safely, without exposing themselves to unnecessary risk. The changing nature of cybersecurity threats, the unique challenges that SMEs face, and a growing skills gap across technology and cybersecurity have led to confusion over what security assurances Channel businesses and resellers should offer end customers without overpromising. Here's a deeper look at what needs to be done, from all sides, to ensure the Channel can continue to mitigate the ever-looming threat of an advanced, often crippling, cyberattack.

How can the Channel tackle the cybersecurity skills gap?

There is a clear skills gap in the Channel when it comes to tackling cyber threats against SMEs.
Channel businesses should be investing in their workforce and training up those who want to explore a specialism, or a career, in cybersecurity. But that doesn't mean the Channel is restrained by the skills gap. In fact, it's quite the opposite.