Users now connect from almost anywhere and on any device, and the ease with which they can do this only increases usage and tests the boundaries of the performance mechanisms in place. Thousands of new devices of every variety connect to the internet every second, and many of them are accessing corporate networks. This creates network congestion, necessitates increasingly sophisticated routing and makes it ever more challenging to ensure low latency and performant delivery.

Change #2: Application delivery environments have become more complex

In response to this and other trends, networking infrastructure has grown more complex. Gone are the days of a centrally managed data centre and physical application stacks. Today, companies rely on private and public cloud, on-prem infrastructure, multiple content delivery networks, serverless architectures, containers and so on. Pushing infrastructure closer to end-users helps control latency in some respects; however, increased complexity without automation also increases the risk of outages and issues, as human errors and bugs are magnified. Complex applications and infrastructure rely on automation, and performance teams need tools that are automated too.

There are also more factors at play that can impact performance which sit outside an organisation's control. A cloud or CDN provider can experience an outage, whether due to a DDoS attack or an unintentional error, and in turn take down the company's website or application unless the company has invested in redundancy within its infrastructure. The device or network being used to access the application may have issues of its own, for example a slow home WiFi network or a poor mobile connection.
Change #3: Operating models have evolved

Given the changes above, operating models have had to shift as well. Without software-defined infrastructure, automation and DevOps practices that speed up integration and continuous delivery pipelines, it is nearly impossible for organisations to manage such complex, dynamic systems manually and efficiently. Companies now leverage automation through infrastructure as code to deploy and manage applications on the public cloud. They also use automation to manage load balancing, networks and network segmentation, and the constraints of data and data-encryption management. To reach their more distributed audiences, they must then scale all of this out geographically to the edge and use observability to extract accurate business intelligence.

Visibility of an entire environment is critical to ensuring reliable and fast application delivery. Yet collecting, measuring and analysing data on the performance of an entire application delivery environment has grown incredibly complicated. Without API-first infrastructure, it is nearly impossible to sift through all the data from an organisation's different technology layers, draw conclusions from it and take action within a reasonable time frame.

A new way to approach low latency in applications

Companies need to create environments that prepare for application, device and internet shortcomings and ensure consistent, superior experiences are delivered to users. They need to understand how to measure their environment and how to incorporate latency improvements by design. While they can control the internal multi-layered approach, they also need to account for what is happening in the broader internet environment that can affect the performance of their applications. Combining this knowledge with data on the internal application stack will help them understand how to improve performance and enhance experiences.
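Measuring the environment starts with the latency data itself. As a minimal sketch (the sample timings below are illustrative, not from this article), the percentiles that shape user experience can be summarised from raw request timings like this:

```python
import statistics

def latency_summary(timings_ms):
    """Summarise raw request timings into the percentiles users actually feel."""
    q = statistics.quantiles(timings_ms, n=100)  # 99 percentile cut points
    return {
        "p50_ms": q[49],   # median: the typical experience
        "p95_ms": q[94],   # tail latency: what frustrated users notice
        "max_ms": max(timings_ms),
    }

# Illustrative timings, e.g. scraped from an application's request log (ms)
samples = [32, 35, 38, 40, 41, 45, 47, 52, 58, 61,
           63, 70, 74, 81, 95, 110, 130, 180, 240, 900]
print(latency_summary(samples))
```

Tracking the p95 figure rather than the average matters here: a handful of slow requests, like the 900ms outlier above, barely moves the mean yet is exactly what drives users to abandon a service.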
They also need to consider how they support the continuum of infrastructure that applications span, from serverless platforms to public cloud infrastructure and data centres. The continuum has a variety of factors that may impact performance, but users have expectations that must be met regardless. This is why edge networking has become so important. Routing data and network traffic in a more optimised way across distributed footprints enables lower latency, and it also gives companies more control while allowing them to use existing infrastructure.

To put this into context, consider the healthcare sector, where finely tuned edge networks that account for latency allow medical diagnostic tools to work quickly, producing results that accelerate diagnosis and treatment. Other examples include global content providers, such as Netflix, or collaboration companies like Dropbox, that intelligently orchestrate their application traffic across huge, distributed networks to ensure low-latency, high-quality content delivery to users.

Low latency must be regarded as a feature built into an organisation's infrastructure from the start, a tier-one priority for companies to deliver, as superior performance is critical to the end-user experience. Today's application audiences have no tolerance for delays and will abandon a service provider if they can't access an application quickly and easily. This means performant application delivery is now paramount to business success.

www.networkseuropemagazine.com
TREs and the IoT: enabling a trusted connected future

Claus Dietze, Chair of the TCA Board

The Trusted Connectivity Alliance (TCA) and the IoT Security Foundation share a common vision: to secure the IoT and drive sustained growth through trusted connectivity. In this article, Claus Dietze, Chair of the TCA Board, explains how Tamper Resistant Elements (TREs) can help the IoT achieve its potential.

The IoT landscape is notoriously under-secured. In the rush to meet the demand for online products, services and infrastructure, many manufacturers have adopted a 'connect first, think later' strategy in which security has been an afterthought. This has resulted in years of serious security and privacy breaches, ranging from hacked baby monitors to the disablement of a Ukrainian power plant.

Today, nearly everything can be brought online. Yet that potential brings challenges. Connecting IoT devices on this scale is exposing more homes, hospitals, power plants and other critical infrastructure to cyberattacks. Now, regulators and authorities across the world are stepping in and finally getting serious about securing the IoT. Although a common global standard for IoT security has not yet been realised, strong progress is being made. Similar principles and best practices are being emphasised, such as the importance of secure storage and communication of credentials, the protection of personal data, and software and firmware integrity, in addition to ensuring secure and reliable connectivity both between devices and from devices to the cloud.

It is also becoming increasingly apparent that hardware technology offers the highest levels of protection needed to meet such robust security requirements.
Tamper Resistant Elements (TREs), for example, are already deployed as SIMs and eSIMs in billions of devices globally to deliver trusted connectivity to cellular networks.

What is a TRE?

A TRE is a standalone secure element or a secure enclave, consisting of hardware and low-level software, that provides resistance against logical and physical attacks and is capable of hosting secure applications and their confidential and cryptographic data. These features combine to give TREs a unique ability to support the most stringent secure end-to-end connectivity solutions. Importantly, there are significant advantages to leveraging these TRE-based SIM products to protect all types of devices across the entire IoT ecosystem. What is not widely recognised is that TREs are available in removable, embedded and, more recently, integrated form factors, more commonly known as the removable SIM, eSIM and integrated SIM.

An established platform for secure authentication and trusted connectivity

Firstly, the tens of billions of devices (and growing) that use cellular connectivity worldwide already contain TRE-based SIM products. The SIM application is required to authenticate a device's access to cellular networks, and the SIM is the most widely distributed secure application delivery platform in the world. By leveraging the advanced capabilities of the TREs already contained within their products, device manufacturers can quickly address security pain points with minimal investment and without having to reinvent the wheel. This leaves more time and resources to focus on their core business. Importantly, TREs can also be easily leveraged to secure connectivity to a range of non-cellular networks. This means IoT devices that do not use cellular networks also stand to benefit from TRE technology.
An untapped platform for protecting data at rest and in transit

TRE-based SIM products support advanced functionality that enables the highest level of security when storing credentials on the SIM and personal data on the device. But the security benefits go beyond the device. By using the untapped potential of the SIM as a secure hardware Root of Trust (RoT), devices can securely connect or authenticate themselves to IoT cloud platforms and services and establish a secure communication channel for the safe transportation of data. This capability is supported by industry initiatives such as IoT SAFE, a partnership between the GSMA and TCA, which defines a standardised way to leverage the SIM and eSIM to securely perform mutual authentication between IoT device applications and the IoT service within the cloud.

Future-proof security through remote management

Finally, the sheer scale of the IoT is making remote management capabilities critical. TRE-based SIM technology is supported by an established, certified infrastructure that enables secure in-factory and in-field provisioning and personalisation, remote lifecycle management and security services. This allows security to be enhanced and updated throughout a device's lifetime. For example, secure credentials can be provisioned remotely to a device, or on the factory production line, to support a secure-by-design approach without impacting manufacturing processes. And since IoT security is not static and threats evolve over time, SIM remote management technology enables these credentials and security parameters to be updated, enhanced or revoked to address new and emerging threats.
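IoT SAFE itself standardises SIM applets and an API for holding TLS credentials; the sketch below is not that API, but a deliberately simplified, hypothetical illustration of the underlying Root-of-Trust principle: a secret provisioned into a secure element never leaves it, and the device proves possession of it via challenge-response.

```python
import hashlib
import hmac
import os

class SecureElementStub:
    """Stand-in for a TRE/SIM: the key is injected at provisioning and is
    never exported; callers can only request signatures over challenges."""
    def __init__(self, provisioned_key: bytes):
        self._key = provisioned_key  # in real hardware, not readable

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def server_verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Cloud side: recompute the expected response, compare in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Provisioning (in-factory or remote) installs the secret on both sides.
key = os.urandom(32)
device = SecureElementStub(key)

# Authentication: the cloud sends a fresh random challenge...
challenge = os.urandom(16)
# ...and the device answers with a signature computed inside the element.
assert server_verify(key, challenge, device.sign(challenge))
```

The fresh random challenge is what makes replayed responses useless, and because only the tamper-resistant hardware can produce a valid signature, extracting credentials from device firmware gains an attacker nothing.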
Striking this balance between robust security and simplicity is particularly important where devices have long lifespans and potentially multiple owners, such as vehicles.

Enabling trust in a connected future

It is clear that addressing security and privacy vulnerabilities across the IoT landscape is an urgent priority, but doing so poses significant challenges. While the ability of the SIM application on the TRE to provide trusted connectivity between the device and the cellular network is well known, there is vast, untapped potential for the TRE to be used far more widely in connected devices to deliver unsurpassed security features and services. This will help promote the sustained growth of a connected society through trusted connectivity that protects assets, end-user privacy and networks.

In recent years, edge computing has become one of the most prevalent topics of discussion within our industry. In many respects, the main purpose of edge data centres is to reduce latency and delays in transmitting data and to store critical IT applications securely. In other words, edge data centres store and process data and services as close to the end-user as possible. Edge is a term that has also become synonymous with some of the world's most cutting-edge technologies. Autonomous vehicles are often discussed as one of the truest examples of the edge in action, where anything less than near real-time data processing and ultra-low latency could have fatal consequences for the user.
There are also many mission-critical scenarios, including within retail, logistics and healthcare, where a typically high-density computing environment, packed into a relatively small footprint with a high kW/rack load, is housed within an edge environment.

Drivers at the edge

According to Gartner, the number of internet-capable devices worldwide passed 20 billion by 2020 and is expected to double by 2025. It is also estimated that approximately 463 exabytes of data (1 exabyte is equivalent to 1 billion gigabytes) will be generated each day by people as of 2025, and at 4.7GB per disc a single exabyte equates to the same volume of data as 212,765,957 DVDs!

While the Internet of Things (IoT) was the initial driver of edge computing, especially for smart devices, these examples have been joined by content delivery networks, video streaming and remote monitoring services, with augmented and virtual reality software expected to be another key use case. What's more, transformational 5G connectivity has yet to have its predicted, major impact on the edge. Clearly, there are significant benefits in decentralising computing power away from a traditional data centre and moving it closer to the point where data is generated and/or consumed. Right now, edge computing is still evolving, but one thing we can say with certainty is that the demand for local, near real-time computing represents a major shift in the types of services edge data centres will need to provide.

The role of containment in mission-critical edge deployments
Gordon Johnson, Senior CFD Engineer, Subzero Engineering

Today, edge data centres need to provide a highly efficient, resilient, dynamic, scalable and sustainable environment for critical IT applications. At Subzero Engineering, we believe containment has a vital role to play in addressing these requirements.
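As a quick sanity check on those figures (assuming a standard 4.7GB single-layer DVD), the equivalence works out like this:

```python
EXABYTE_GB = 1_000_000_000   # 1 exabyte = 1 billion gigabytes, as above
DVD_GB = 4.7                 # capacity of a single-layer DVD
DAILY_EXABYTES = 463         # data generated per day by 2025 (estimate)

dvds_per_exabyte = EXABYTE_GB / DVD_GB
dvds_per_day = DAILY_EXABYTES * dvds_per_exabyte

print(f"{dvds_per_exabyte:,.0f} DVDs per exabyte")  # ~212,765,957
print(f"{dvds_per_day:,.0f} DVDs per day")          # ~98.5 billion
```

In other words, the oft-quoted 212,765,957 figure describes a single exabyte; the full 463-exabyte daily estimate corresponds to roughly 98.5 billion DVDs every day.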
Efficiency and optimisation remain key

An optimised edge data centre environment must meet a long list of criteria. The first is reliability, as edge facilities are often remote and have no on-site maintenance capabilities. Secondly, they require modularity and scalability: the ability to grow with demand. Thirdly, there is the issue of the lack of a 'true' definition. Customers still need to define the edge in the context of their business requirements, deploying infrastructure in line with business demands, which can of course affect the design of their environment. And finally, speed of installation. For many end-users, time to market is critical, so an edge data centre often needs to be built and delivered on-site in a matter of weeks.

There is, however, one more important factor to consider. An edge data centre should offer true flexibility, allowing the user to quickly adapt to or capitalise on new business opportunities while offering sustainable and energy-efficient performance. Edge data centres are, in many respects, no different from traditional facilities when it comes to the twin imperatives of efficiency and sustainability. PUE as a measure of energy efficiency applies to the edge as much as to large, centralised facilities. And sustainability, especially the drive towards Net Zero, is a major focus for the sector in its entirety. However, what will change over time is the ratio of edge data centres. By 2040, it is predicted that 80% of total data centre energy consumption will come from edge data centres, which begs an obvious question: what will make the edge energy-efficient, environmentally responsible, reliable and sustainable all at the same time?

The role of containment

Containment is almost certainly the easiest way to increase efficiency in the data centre. It also makes a data centre environmentally conscious because, instead of consuming energy, containment saves it.
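PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, so the calculation is a one-liner; the kW figures below are illustrative, not measurements from any particular site:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the ideal; everything above it
    is overhead such as cooling, power distribution and lighting."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative: a 100kW IT load carrying 60kW of cooling and other overhead...
print(pue(160.0, 100.0))  # 1.6
# ...versus the same IT load after containment cuts the overhead to 30kW.
print(pue(130.0, 100.0))  # 1.3
```

The same ratio works whether the inputs are instantaneous kW or annual kWh; what containment changes is the numerator, by reducing the cooling energy needed to serve an unchanged IT load.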
This is especially true at the edge. Containment helps users get the most out of an edge deployment because it prevents cold supply air from mixing with hot exhaust air, allowing supply temperatures at the server inlets to be increased. Since today's servers are recommended to operate at inlet temperatures as high as 80.6ºF (27ºC), containment allows for higher supply temperatures, less overall cooling, lower fan speeds, increased use of free cooling and reduced water consumption: all important factors when it comes to improving efficiency and reducing the carbon footprint at the edge. Further, a contained solution consumes less power than an equivalent uncontained one, which makes for an environmentally friendly, cost-effective environment. It also improves reliability, delivering a longer Mean Time Between Failures (MTBF) for the IT equipment, as well as a lower PUE.

Uncertainty demands flexibility

At Subzero, we believe an edge data centre needs to be flexible and both quick and easy to install. It needs to be right-sized for the here and now, but capable of incremental, scalable growth. Further, it should allow the customer to specify the key components, such as the IT, storage, power and cooling solutions, without constraining them by size or vendor selection. Thankfully, there are edge data centre providers who now offer an enclosure built on-site in a matter of days, with ground-supported or ceiling-hung infrastructure to support ladder racks, cable trays, racks and cooling equipment. These architectures mean the customer can choose their own power and cooling systems, and once the IT stack is on-site and the power is connected, the data centre can be up and running in a matter of days. Back in 2018, Gartner predicted that, by 2023, three-quarters of all enterprise-generated data would be created and processed outside a traditional, centralised data centre.
As more and more applications move from large, centralised data centres to small edge environments, we anticipate that only a flexible, containerised architecture will offer end-users the perfect balance of efficiency, sustainability and performance.