Getting to grips with SD-WAN
Jay Botelho, Senior Director of Product Management, LiveAction

Demand for Software Defined Wide Area Networks (SD-WAN) is growing at a tremendous rate – boosted by the rise of hybrid working and the increased roll-out of FTTC. However, optimising and troubleshooting SD-WAN links requires a rethink when it comes to tools, skills and priorities.

Organisations are increasingly looking to SD-WAN deployments for improved performance and reduced cost, and WAN environments become more dynamic and secure with SD-WAN automation. For example, it can provide a direct internet connection from a London headquarters to an office in Manchester, enabling teams to balance between multiple service providers and transport types more easily while making intelligent adjustments to application paths for better performance.

Visibility is key
One of the primary benefits of SD-WAN is the ability to combine multiple technologies, such as MPLS and business broadband connections from different ISPs. This can add capacity, performance and resiliency to any WAN, but it also brings complexity, as organisations must juggle multiple ISP relationships to procure and manage connectivity. In which scenario is each ISP most effective? Is splitting traffic between them the right move? If so, what's the best way to determine the allocation? These are just some of the questions that inevitably come up. Beyond that, organisations must also manage SLAs, monitor for outages or slowdowns, reroute traffic as needed and more.

For example, say network traffic is split between two ISPs – one for web traffic and the other for all web-hosted productivity apps such as email, CRM and ERP. This works well until one ISP goes down, at which point all traffic must be rerouted to the other. That's when traffic prioritisation issues can cascade into poor connectivity that degrades user experiences and hurts productivity. Circumstances like these are why you must be capable of properly visualising, classifying and prioritising traffic across all ISPs (a simple sketch of this failover logic appears at the end of this section).

Security focus
Another concern is cyber security risk. Although SD-WAN links tend to be paired with VPN technology, the data flows will often traverse the public internet, which requires organisations to enforce best-practice security controls and processes. With more users working remotely, access from the public internet – and connections from it to hosted services and applications – is more exposed to security threats. This path can allow adversaries to avoid most of the security controls IT departments rely upon, such as firewall rules and any deployed IDS/IPS, making corporate data protection dependent on individual employees' security practices. Even with the growth in home working following the pandemic, most staff lack high-quality IDS/IPS on their home networks, leaving them more vulnerable to phishing attempts and malware attacks. In most cases, this lack of close IT control puts corporate data directly in jeopardy. As such, it is essential to deploy some form of endpoint security on each user's system that can secure user applications and enforce centrally defined security rules, allowing for monitoring and security policy enforcement. This endpoint control should be integrated with the network monitoring platform to allow for a truly unified management approach.
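To make the two-ISP failover example under 'Visibility is key' concrete, here is a minimal Python sketch of priority-aware path assignment with a simple health check. The link names, priority classes and capacity figures are all invented for illustration – a real SD-WAN controller would rely on live probes and per-application SLA policies rather than this toy logic.

```python
# Toy model: pin flows to the healthy uplink with the most spare capacity,
# placing high-priority applications first so that, after an ISP outage,
# business apps keep their share of the surviving link.

PRIORITY = {"voip": 0, "crm": 1, "email": 2, "web": 3}  # lower = placed first

class Link:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity_mbps = capacity_mbps
        self.up = True  # stands in for a real health probe

def assign_paths(flows, links):
    """Assign each flow to a healthy link; raises if all uplinks are down."""
    live = [l for l in links if l.up]
    if not live:
        raise RuntimeError("no healthy uplinks")
    assignments = {}
    for flow in sorted(flows, key=lambda f: PRIORITY.get(f["app"], 99)):
        best = max(live, key=lambda l: l.capacity_mbps)  # most spare capacity
        assignments[flow["id"]] = best.name
        best.capacity_mbps -= flow["mbps"]
    return assignments

isp_a, isp_b = Link("ISP-A", 500), Link("ISP-B", 500)
flows = [{"id": 1, "app": "web", "mbps": 200},
         {"id": 2, "app": "crm", "mbps": 100},
         {"id": 3, "app": "email", "mbps": 50}]

print(assign_paths(flows, [isp_a, isp_b]))  # traffic split across both ISPs
isp_b.up = False                            # simulate an ISP outage
isp_a.capacity_mbps = 500                   # reset for the rerun
print(assign_paths(flows, [isp_a, isp_b]))  # everything re-pins to ISP-A
```

The ordering is the whole point: when both links' traffic lands on one survivor, whatever is placed last absorbs the congestion, so low-priority traffic should be placed last by design.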
Policy-based management
With the rise of cloud-based applications, connectivity becomes a critical factor in determining overall application performance. An organisation will struggle to manage application performance effectively without traffic prioritisation, which is virtually impossible to enforce once traffic hits the public internet. With a hub-and-spoke architecture, an organisation can contract for a big pipe and average many users across that pipe to ensure consistent performance at a reasonable cost per user. But as organisations start to embrace hybrid working, with more remote users and locations, it becomes difficult to manage all these remote internet connections and guarantee performance.

For example, imagine an employee who needs to transfer massive video, CAD or database files on a regular basis. Such a file could easily be 100GB, and even when the employee is working at the office on a 1Gbps internet connection, transferring it could consume the network for over 13 minutes. With remote working, most residential connections will rarely offer more than 100Mbps, so it's easy to see how a single large transfer could bottleneck a poorly managed SD-WAN setup.

To counter this, organisations need to set policies within the SD-WAN management engines that make automated decisions based on scenarios such as large file transfers or the priority of a user or task. These policies can enact upload rate limits for large files, or move traffic from a priority leased-line circuit to a lower-cost, lower-performance DSL-based connection for non-critical tasks such as social media or viewing content from YouTube (a sketch of this logic appears at the end of this article).

Managing ISPs
Alongside the benefits of using multiple ISPs, however, comes the issue of inconsistent quality. A recent survey found that nearly one in four organisations sees inconsistent quality across multiple ISPs as a significant challenge for their business. This is because each ISP uses its own technologies and rolls out updates at its own speed. And ISPs don't treat all areas equally; they're focused on servicing the broadest population possible with minimum investment. This can lead to underserved geographies and inadequate quality of service for organisations operating within them. In each city, ISPs may provide more bandwidth to business parks than to residential areas, and the maximum bandwidth available may depend on the postcode in which a site is located. Both the maximum available connection speeds and the demand in the neighbourhood can limit bandwidth.

As users, and therefore the network, become increasingly distributed, controlling user experiences will become extremely challenging. This means that gaining metrics around SD-WAN is vital, and to this end, flow-based network analysis can help perform real-time network topology mapping for devices, interfaces, applications, VPNs and users. It can also help establish critical baselines for SD-WAN deployments, such as site-to-site traffic types and paths, application behaviours and consumption patterns. This type of granular insight is essential to get to grips with SD-WAN and enable the concept to deliver to its full potential.
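The arithmetic behind the file-transfer example is worth spelling out: 100GB is 800 gigabits, which takes roughly 800 seconds (about 13.3 minutes) at 1Gbps, but around 8,000 seconds (over two hours) on a 100Mbps residential line. Below is a hedged Python sketch of both the calculation and the kind of policy decision described under 'Policy-based management'. The application names, size threshold and path labels are invented; real SD-WAN engines express these rules through their own policy schemas.

```python
# Transfer-time arithmetic plus a toy policy decision: route non-critical
# apps onto the cheap DSL circuit and rate-limit large bulk transfers.

def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    """Seconds to move size_gb over a link (1 GB = 8,000 megabits),
    ignoring protocol overhead."""
    return size_gb * 8_000 / link_mbps

print(transfer_seconds(100, 1000) / 60)    # ~13.3 minutes at 1 Gbps
print(transfer_seconds(100, 100) / 3600)   # ~2.2 hours at 100 Mbps

LOW_PRIORITY_APPS = {"social-media", "youtube", "software-updates"}

def choose_policy(app: str, size_gb: float, priority_user: bool) -> dict:
    """Pick a path and an optional upload cap for a flow."""
    if app in LOW_PRIORITY_APPS:
        # Non-critical traffic moves to the lower-cost DSL connection.
        return {"path": "dsl", "upload_cap_mbps": None}
    if size_gb >= 10 and not priority_user:
        # Large transfers from ordinary users get a rate limit so one
        # file cannot monopolise the leased line.
        return {"path": "leased-line", "upload_cap_mbps": 100}
    return {"path": "leased-line", "upload_cap_mbps": None}

print(choose_policy("youtube", 0.5, priority_user=False))
print(choose_policy("cad-transfer", 100, priority_user=False))
```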
The IoT Data Deluge in Industry and Manufacturing
David Keegan, Group CEO, DataQube Global

"Effective data storage management is a critical component of the IoT ecosystem," says David Keegan, Group CEO of DataQube Global.

The Internet of Things (IoT), from a top-level standpoint, refers to a network of physical devices – such as embedded sensors, driverless vehicles, smartphones and tablets, wearables or home appliances – that create and share information without human intervention. Even though there is currently a strong drive toward IoT and digitisation, the concept has been around for at least the last 10 years, with interconnected devices and applications prevalent in both industry and consumer products. What has recently changed is the augmented capabilities of these devices, faster comms networks, the standardisation of communication protocols and more affordable IT, all of which are giving the IoT phenomenon a turbocharge. As such, it is transforming operational processes and product lifecycles across a range of markets and applications.

That said, the detailed level of information current IoT devices are capable of capturing should be empowering manufacturers to leverage the benefits of Industry 4.0 and operate truly automated production and assembly lines, but this isn't happening as quickly as you might expect. While some of the barriers may be cultural or finance-related, a much bigger barrier in many instances is the mismatch between highly intelligent devices and substandard data handling and storage management infrastructure.

An unavoidable consequence of IoT, and the devices and applications it powers, is the colossal amount of constantly changing data that is generated as a result.
This data needs to be processed in real-time if meaningful conclusions are to be drawn and swift decisions made to avoid bottlenecks and keep production lines operational, as the smallest of delays can have major repercussions further down the line. This is particularly important for manufacturers reliant on artificial intelligence (AI) and machine learning (ML). Both disciplines are data-intensive, bandwidth-hungry and require robust storage management processes that enable parallel processing at scale. Indeed, the value of any IoT-derived data is incredibly short-lived, and unless the associated storage management infrastructure can keep pace with the constantly changing data, IoT investment can very quickly become an expensive white elephant.

So, what happens to all the IoT data?
The data flow in an industrial IoT (IIoT) network is, in simple terms, a three-layer process (a code sketch of the flow appears at the end of this section):

1. Data sources
IoT gathers data from an array of devices and/or embedded sensors. The information can either be processed locally – depending on the availability of appropriate infrastructure, the sensitivity of the data, or the nature of the industry – or transported via an edge gateway to a colocation facility or the cloud for processing and handling.

2. Data storage
The data captured by the embedded technologies then needs to be appropriately stored for long-term and short-term applications. Some of the data might require immediate processing depending on the application (the operability of an industrial robot, for example), whereas some might need to be securely transported and/or stored for future applications.

3. Data analytics & applications
This layer analyses the data so useful information can be generated and acted upon to speed up operations and process control. Detailed insights into production lines and product life cycles enable slicker operations through predictive maintenance and other troubleshooting measures, thus avoiding the downtime and outages that can impact efficiency and profitability.

Data storage is a small cog in a big IoT wheel
Storage is just one element of the IoT data processing ecosystem, but it is an element that is becoming increasingly integral, as insufficient storage capacity is detrimental to operability. The storage capabilities of any IoT network must assure data integrity, reliability and safety. Moreover, they must be agile enough to support a range of environments, technologies and applications, while facilitating seamless interconnectivity between edge gateways, other edge devices and the cloud.

Substandard storage is the Achilles heel of many manufacturers, with outdated comms rooms preventing them from harnessing IoT data to its full potential. Insufficient storage capacity is such an issue that, according to industry research, between 60% and 73% of machine-generated data goes unanalysed. The IoT data that powers Industry 4.0 needs to be processed as close to the source as possible for operability and safety reasons, and organisations reliant on mission-critical data are quickly realising that conventional colocation facilities cannot always assure the ultra-high speed and ultra-low latency needed. Paradoxically, many on-premises facilities are not fit for purpose as far as IoT data storage management is concerned, because they are unable to house the specialist IT needed. And even when they are, there is seldom room for expansion as capacity requirements escalate.
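The three layers map naturally onto a simple edge pipeline. The sketch below is illustrative only – the sensor values, the vibration threshold and the split between 'hot' edge storage and 'cold' long-term storage are all assumptions, not a description of any specific IIoT stack.

```python
# Minimal sketch of the three-layer IIoT data flow: sources -> storage
# -> analytics, with latency-critical data kept at the edge.
import time
import statistics

def read_sensors():
    """Layer 1 - data sources: poll embedded sensors (stubbed values)."""
    return {"ts": time.time(), "vibration_mm_s": 4.2, "temp_c": 71.0}

hot_store = []   # Layer 2 - short-term, processed at the edge
cold_store = []  # Layer 2 - shipped via an edge gateway for archiving

def store(reading, latency_critical):
    """Keep latency-critical data local; queue the rest for the cloud."""
    (hot_store if latency_critical else cold_store).append(reading)

def needs_maintenance(window):
    """Layer 3 - analytics: flag a machine for predictive maintenance
    when average vibration drifts above an (invented) threshold."""
    return statistics.mean(r["vibration_mm_s"] for r in window) > 4.0

reading = read_sensors()
store(reading, latency_critical=True)   # robot operability: process locally
store(reading, latency_critical=False)  # same reading archived for trends

if needs_maintenance(hot_store):
    print("schedule maintenance before the line stalls")
```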
High-performance computers (HPC), because of their sheer magnitude, GPU-based processing power, associated cooling technology and high energy consumption, need specialist facilities that are fireproof and weatherproof, offer seamless connectivity to the cloud and support dynamic power consumption. Commissioning a bespoke facility robust enough to meet the demands of IoT data is a non-starter for many manufacturers because of the high costs involved – anything between £7-12 million per MW, with lead times in excess of 18 months.

What is needed is a viable means of providing centralised data centre capabilities locally, with HPC-grade processing, but without the expense of building a bespoke facility. This has not been possible thus far due to financial constraints, complex project management requirements and excessive deployment times. However, the IoT data handling quandary in manufacturing is about to be transformed thanks to a disruptive approach to edge data centre infrastructures being championed by DataQube Global. Recognising the need for data handling at source, the company has developed a portfolio of podular data centres for internal and external usage that assure the high-speed, high-performance, low-latency processing needed for IoT data. Installs are possible from less than 10W to +100MW, and individual pods can operate independently as mini data centres or be merged in stacks, depending on the size of the manufacturing facility or the storage capacity needed.

Without a cost-effective and viable means of delivering HPC at the edge, IoT data will remain untapped, regardless of the accuracy or sophistication of the associated embedded sensors. Edge data centre infrastructures must adapt to meet this changing data processing landscape.
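As a quick back-of-envelope check on the build costs quoted above, the per-MW range translates directly into project budgets. The sketch below simply multiplies out the article's £7-12 million per MW figure; the 5MW facility size is a hypothetical example.

```python
# Rough capital cost range for a bespoke facility, using the GBP 7-12M
# per MW figure cited in the article (illustrative arithmetic only).
def build_cost_range_gbp(capacity_mw, low_per_mw=7e6, high_per_mw=12e6):
    """Return (low, high) capital cost estimates in GBP."""
    return capacity_mw * low_per_mw, capacity_mw * high_per_mw

lo, hi = build_cost_range_gbp(5)  # a hypothetical 5 MW build
print(f"5 MW bespoke build: GBP {lo/1e6:.0f}M-{hi/1e6:.0f}M, 18+ month lead time")
```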
Powering Sustainable Data Centre Growth Through Chilled Water Cooling
Andrea Moschen, Thermal Management Product Application Manager, Vertiv

Data centres are becoming increasingly important to all aspects of modern life. Driven by the rising penetration of high-end cloud computing in enterprises, the rapid digitalisation of all industries, the new normal of remote work, the growing use of Over-The-Top services and more, the data centre services market is predicted to continue to grow.

And, as well as market growth, the role of the data centre will significantly expand too – with providers recognising the wealth of opportunities in a market that is predicted to grow to $172.36 billion by 2030, at a CAGR of 14.4% between 2020 and 2030. Like many businesses, data centres are looking for new solutions that can meet their ambitious energy efficiency and carbon neutrality targets. So, how do those responsible for data centres prepare for growth while meeting their green commitments? Much of the answer lies in chilled-water cooling systems, which provide a viable way for data centre providers and managers not only to support their growth cost-effectively and with minimal disruption, but also to reduce their carbon footprint and help meet sustainability objectives by cutting both direct and indirect emissions.

Meeting the challenge ahead
When it comes to reducing the global warming potential (GWP) of data centre operations, traditional refrigerants are already being replaced by HFO (hydrofluoro-olefin) refrigerants. However, most of these new refrigerants are classified by ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) as mildly flammable, therefore requiring a new design for the cooling system and potentially impacting the broader data centre design. Chilled water systems offer an excellent solution to this issue, as the refrigerant is contained within the chiller units and, in most applications, these are installed outside the data centre, thus simplifying the use of flammable fluids. Chilled water systems are among the first cooling technologies to apply low-GWP refrigerants in data centre applications and are therefore a valid alternative for reducing direct environmental impacts.

Chilled water solutions also play a vital role in the reduction of indirect emissions by cutting energy consumption. In recent years, they have incorporated a range of cooling system efficiency improvements that reduce electricity usage. Chillers equipped with inverter-driven screw compressors, or oil-free centrifugal compressors, are now available and drastically cut electricity consumption compared with previous generations of the technology.

An evolving solution
The good news is that developments in this arena are happening all the time – delivering more efficiencies and greater benefits to providers. For example, over the past few years, ASHRAE has increased the recommended operating temperature of data centre equipment up to 27°C – allowing subsequent increases to the water temperatures within chilled water systems and enabling extended use of free cooling chillers, even in countries or climates where free cooling was not previously feasible. Free cooling technology has an important advantage in that it does not require the activation of the compressor.

Adiabatic technology can also improve the efficiency of a chilled-water system. In these solutions, the ambient air is cooled down by passing through wet pads. The air is then delivered at a lower temperature, achieving a higher free cooling capacity for the chiller and more efficient operation of the compressor. The core of this solution is the unit's onboard controller: it enables the use of water only when strictly needed, according to redundancy, efficiency or cooling demand requirements. The controller's main responsibility is to prevent water from being wasted and to improve the WUE (water usage effectiveness) of the data centre. The application of water is always a matter of balancing different aspects and constraints.

Even more improvements to data centre efficiency can be made through the optimisation of chilled-water system controls. Chilled plant manager technology can coordinate the operation of all the units and main components of the chilled-water solution. It allows integration and coordination of working modes between units and main components, enabling improved efficiency and performance at partial loads or, in the unlikely event of a failure, finding the best way to react and maintain cooling continuity across the system.
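Two of the ideas above lend themselves to a small worked example: when free cooling can engage, and how WUE is calculated (litres of water consumed per kWh of IT energy). The temperature figures, the 4°C approach and the adiabatic pre-cooling drop in this Python sketch are invented for illustration; only the WUE ratio itself follows the standard definition.

```python
# Illustrative free-cooling check and WUE calculation. All temperatures
# and the approach/adiabatic figures are assumptions, not vendor data.

def effective_air_temp(ambient_c, adiabatic=False):
    """Adiabatic wet pads pre-cool incoming air (6 C drop is invented)."""
    return ambient_c - 6.0 if adiabatic else ambient_c

def free_cooling_available(ambient_c, water_return_c,
                           approach_c=4.0, adiabatic=False):
    """Compressor can stay off when the air is cold enough to chill the
    return water, allowing for a heat-exchanger approach temperature."""
    return effective_air_temp(ambient_c, adiabatic) <= water_return_c - approach_c

# Raising operating temperatures toward ASHRAE's 27 C recommendation
# raises water setpoints, so more hours of the year qualify:
for ambient in (10, 18, 24):
    print(ambient, "dry:", free_cooling_available(ambient, water_return_c=24),
          "adiabatic:", free_cooling_available(ambient, water_return_c=24,
                                               adiabatic=True))

def wue(annual_water_litres, it_energy_kwh):
    """Water usage effectiveness: litres per kWh of IT energy."""
    return annual_water_litres / it_energy_kwh

print(f"WUE: {wue(1_500_000, 8_760_000):.2f} L/kWh")  # example numbers
```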
The ability to achieve new targets
A great example of this in practice is at Green Mountain, a ground-breaking Norwegian hydro-powered data centre where the thermal management system plays a big role. Based in a former NATO facility carved deep into a mountain, Green Mountain has signed up to the Climate Neutral Data Centre Pact, which requires providers to be completely climate neutral by 2030. To achieve this goal, the DC1-Stavanger data centre now runs on 100% renewable hydropower and, importantly, is cooled with water from the fjord, which maintains a continuous temperature of 8°C (46°F) all year round. Green Mountain gained five megawatts of additional cooling capacity after the installation of Vertiv's chilled water units, demonstrating how these systems, as part of a broader strategy, can facilitate carbon-neutral data centre configurations.

And Green Mountain isn't alone. Many hyperscale and colocation providers are now embracing the opportunity chilled-water systems present, not only from a cost and speed-of-deployment perspective but with sustainability front and centre. This needs to continue as we move into the next phase of the race to expand capacity and improve the data centre's carbon footprint. With such rapid expansion and increasing pressure to achieve net zero, data centre providers must rely on new technologies to meet the requirements of both today and tomorrow.