Latent Tendencies – the Missing Link
John Hall, Managing Director - Colocation, Proximity Data Centres

Why regional edge colocation data centres are key to reducing latency, backhaul costs and bandwidth bottlenecks.

Minimising latency is becoming essential to enterprise productivity, efficiency, competitive advantage and customer experience. Latency also affects 5G mobile network coverage, streaming video, and AI, machine learning and deep learning applications such as those required in industrial automation and medical environments - not to mention the performance and safety of driverless vehicles. Moreover, with significantly more of the population working from home, and many moving out of expensive cities to semi-rural areas, there is growing pressure on the networks that backhaul traffic to the UK's few hyperscale data centres.

All of the above is leading to the emergence of a new breed of 'edge' colocation data centres. These are the missing links between centralised clouds and the users, computers, machines and devices at the network edge. They are highly connected - including links to local internet exchanges - and located in proximity to large conurbations and densely populated cities up and down the UK. Furthermore, edge colocation can relieve centralised data centres from becoming data traffic congestion points, caused by the thousands of households and small businesses now connecting to FTTP at up to 1Gbps.

Strategic locations
Strategically positioned regional colocation facilities are pivotal to the success of edge computing deployments. However, it is important to carefully consider the geographic location of your target users and customers before committing to existing data centre facilities. In the UK, it is worth noting that outside of Greater London there are around 40 densely populated urban areas.

In a country the size of the UK, this may sound relatively straightforward. After all, the physical distances between regions and large cities are small compared to, say, the US or China. However, away from the London metro area - easily the most densely populated, with around 9.4 million people - some 57 million are dispersed somewhat unevenly. For example, the large conurbations of Greater Manchester and the Birmingham area have around 2.5 million people apiece, but the East Coast and Southwest populations are considerably smaller by comparison - and spread more thinly. Scotland has a much smaller population than England, but with anomalously large concentrations around big cities such as Glasgow. There is no one-size-fits-all data centre solution.

So, choosing the right sites in the right locations will ultimately pay off in terms of your organisation's operational efficiencies, bringing increased agility, competitive advantage and cost reductions. Consideration must also be given to the number of hops and where on the network an edge colocation site will be situated - these factors will affect its suitability for specific latency use cases. Access to local internet exchanges and to public cloud infrastructure via gateways are further factors.
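To make the distance-and-hops trade-off concrete, here is a minimal sketch estimating round-trip time from fibre distance and hop count. All figures are illustrative assumptions (roughly 5 µs per km of fibre one way, and a nominal 0.5 ms of processing per hop), not measurements from any particular network:

```python
# Rough round-trip latency estimate for an edge vs a centralised site.
FIBRE_US_PER_KM = 5   # assumed one-way propagation delay per km of fibre
PER_HOP_MS = 0.5      # assumed queuing/processing delay per router hop

def round_trip_ms(fibre_km: float, hops: int) -> float:
    """Estimate round-trip time from fibre distance and hop count."""
    propagation_ms = 2 * fibre_km * FIBRE_US_PER_KM / 1000
    processing_ms = 2 * hops * PER_HOP_MS
    return propagation_ms + processing_ms

# A regional edge facility ~50 km away over 3 hops, versus a hyperscale
# site ~300 km away over 8 hops (both hypothetical figures):
print(f"edge:        {round_trip_ms(50, 3):.1f} ms")   # ~3.5 ms
print(f"centralised: {round_trip_ms(300, 8):.1f} ms")  # ~11.0 ms
```

Even on this crude model, shortening the path and cutting the hop count takes a meaningful slice out of the round trip - which is exactly the budget that latency-sensitive edge use cases have to live within.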
Deciding factors
However, in the rush to move data much closer to users and customers, it is important not to overlook a potential edge data centre's overall credentials. Network latency is obviously key, but so too are factors such as uptime service record, physical and cyber security, and DR and business continuity contingencies. Carbon credentials and energy efficiency (PUE) are further considerations, along with forward power availability for keeping pace with future requirements.

The level of on-site engineering competence available is also important, especially for configuring and interconnecting complex hybrid cloud environments. By connecting public and private clouds together, hybrid clouds can optimise the available compute, connectivity, bandwidth and storage capabilities, which enhances application responsiveness, user experience and productivity. This entails hosting private clouds in one or several edge colocation facilities and connecting these to public cloud services hosted by service providers in centralised hyperscale data centres. Mission-critical applications are therefore securely contained within the private edge cloud environment, with only non-time-critical data sent back to the public cloud - perhaps for further analysis or archiving.
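As a rough illustration of that placement rule, the hypothetical sketch below keeps mission-critical or latency-sensitive workloads in the private edge cloud and sends everything else back to the public cloud. The workload names and the 10 ms budget are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float    # the application's latency tolerance
    mission_critical: bool

# Assumed budget: anything needing under 10 ms stays at the edge.
EDGE_LATENCY_BUDGET_MS = 10.0

def placement(w: Workload) -> str:
    """Hybrid placement rule as described above."""
    if w.mission_critical or w.max_latency_ms <= EDGE_LATENCY_BUDGET_MS:
        return "private edge cloud"
    return "public cloud (analysis/archiving)"

for w in [Workload("machine-vision control loop", 5.0, True),
          Workload("nightly batch analytics", 60_000.0, False)]:
    print(f"{w.name}: {placement(w)}")
```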
There are also logistical issues to consider, not least installing new servers or moving existing ones from elsewhere. This will need to be done quickly and with minimal downtime, so it will most likely require specialist support. An operator that provides door-to-door migration services could therefore be a major benefit, along with the flexibility to carry out pre-production testing in the data centre to ensure everything works prior to launch. Straightforward SLAs and single contracts covering all edge colocation sites in an operator's portfolio will save management time and complexity; dealing with several smaller data centres owned by different suppliers, all with various terms and conditions, brings hidden costs.

In summary, performing much of the data processing, control and management of local applications in edge colocation data centres allows latency to be reduced and application responsiveness to be optimised. At the same time, data transit costs can be significantly reduced by eliminating the need to send everything back to centralised clouds - often hosted in large data centres hundreds of miles away. With growing demands and concerns surrounding latency, network bandwidth congestion and rising backhaul costs, strategically positioned regional 'edge' colocation facilities are becoming essential to the success of UK edge deployments.

Why It's Important to Build a Cybersecurity Asset Inventory
Fabian Libeau, VP Sales EMEA at Axonius

Cybersecurity asset management is a foundation stone of any security programme and fundamental when regulations need to be met. Whether it relates to device discovery, endpoint protection, cloud security, vulnerability management, or anything in between, it is impossible for an organisation to be truly secure unless network and security managers have a complete understanding of everything in their IT environment.

But conventional asset inventory is all too often done manually, which can be both time-consuming and liable to error. IT environments are becoming more complex, which means that these approaches are rapidly becoming even more outdated. There is, however, another approach. By using modern cybersecurity asset management, companies can build a more comprehensive, real-time inventory of their assets, which allows them to uncover gaps and trigger automated response actions if devices or users that connect to the network deviate from agreed-upon security policies, controls and expectations. Let's have a look at two key use cases: device discovery and vulnerability management.

Device Discovery
Hundreds, sometimes thousands, of devices, users, software applications and cloud instances are connected to today's networks and must be managed, tracked and secured. The sheer number and complexity of these connections mean that gaining a credible and comprehensive asset inventory can be a major challenge for security and network managers.

Unmanaged devices, particularly if they are not known or accounted for, may evade an asset inventory. A mobile phone or a connected printer, for example, cannot be protected by security tools if it is unknown to the network. Asking Active Directory (AD) to find it won't work, and manually comparing AD data, network management and endpoint security software is time-consuming and not guaranteed to be accurate. The only way to efficiently gather data on unmanaged devices and determine whether they need to be part of a patch schedule or have an agent installed is to use a process that continuously monitors for them.

When it comes to ephemeral devices - those that exist only for a short time, such as containers, cloud workloads or virtual machines - they can be known and authorised, yet it is challenging for security and network teams to identify their presence in real time. Agent-based approaches tend to fall short, as many ephemeral devices don't have an agent to begin with, and network-based tools often lack the contextual data points needed to identify these devices. Without ensuring that the devices have been patched or security agents have been deployed, organisations are putting themselves at greater risk. Finding ephemeral devices requires connecting the sources where devices are created and deprecated, and that means implementing continuous asset discovery capabilities.
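At its core, that cross-source correlation is set arithmetic, as the minimal sketch below illustrates with invented device names. A real platform would pull these inventories continuously via each tool's API rather than hard-code them:

```python
# Hypothetical inventories from three sources; in practice these would be
# fetched from the AD, endpoint-security and network-management APIs.
active_directory = {"laptop-01", "laptop-02", "server-01"}
endpoint_agents  = {"laptop-01", "server-01"}
network_scan     = {"laptop-01", "laptop-02", "server-01",
                    "printer-07", "phone-usr-12"}  # everything seen on the wire

all_known = active_directory | endpoint_agents
unmanaged = network_scan - all_known                 # known to no management tool
missing_agent = active_directory - endpoint_agents   # managed but unprotected

print("Unmanaged devices:", sorted(unmanaged))
print("Devices without an endpoint agent:", sorted(missing_agent))
```

In this toy example the printer and the phone surface as unmanaged devices, and one managed laptop is flagged as missing its agent - exactly the gaps a manual comparison tends to miss.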
Vulnerability Management
All IT systems are vulnerable, and the gap between vulnerabilities and resource capacity means that it is necessary to prioritise where the greatest risks lie and, by the same token, what the company is willing to ignore. Vulnerability assessment tools are great for identifying known vulnerabilities on devices. The challenge is ensuring that all devices, including cloud instances and virtual machines, are scanned. The most effective way of gathering this information is to compare two or more trustworthy data sources to help identify any gaps. The challenge for most companies is that data exists in silos, which can make comparisons difficult and time-consuming, leaving too much room for human error.

Asset Management for Cybersecurity is the Solution
Asset management platforms for cybersecurity work by gathering an inventory of all the assets that are actively connected to the network, so security and network managers can see at a glance which ones are in scope and how they are configured. The platform works by aggregating data from different sources, assessing which devices are unmanaged or misconfigured, and determining whether each asset complies with or deviates from agreed-upon security policies. This means that vulnerabilities are constantly being monitored across the entire IT infrastructure. In addition, audits and penetration tests can be streamlined, and the company remains compliant with industry regulations.

However, if organisations want to get the full potential out of their cybersecurity asset management solution, they need to ensure it is automated, continuous, easy to use and fast to implement. It should integrate with existing security and management tools so it can derive asset details from a variety of data sources, regardless of whether assets are managed, unmanaged, on-premise or in the cloud.
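A toy illustration of that correlation step: the sketch below checks each aggregated asset record against two agreed-upon expectations - an endpoint agent present, and a vulnerability scan within the last 30 days - and flags deviations for automated follow-up. The field names, records and thresholds are assumptions for the example, not any vendor's schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical, normalised asset records as an aggregation platform
# might hold them after correlating its sources.
assets = [
    {"host": "server-01", "agent": True,
     "last_scan": datetime(2022, 5, 1, tzinfo=timezone.utc)},
    {"host": "vm-eph-42", "agent": False, "last_scan": None},
]

MAX_SCAN_AGE = timedelta(days=30)  # assumed policy threshold

def deviations(asset: dict, now: datetime) -> list[str]:
    """Return the ways this asset deviates from the expected policy."""
    problems = []
    if not asset["agent"]:
        problems.append("endpoint agent missing")
    if asset["last_scan"] is None or now - asset["last_scan"] > MAX_SCAN_AGE:
        problems.append("no recent vulnerability scan")
    return problems

now = datetime.now(timezone.utc)
for a in assets:
    for p in deviations(a, now):
        # A real platform would trigger an automated response here,
        # e.g. open a ticket, deploy an agent or queue a scan.
        print(f"{a['host']}: {p} -> trigger remediation")
```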
The question for network and security managers, as cybersecurity risks grow daily, is whether they can take the risk of being without an always up-to-date inventory. The ability to uncover gaps in their security defences, and the reassurance of action being automated whenever a vulnerability presents itself, is becoming increasingly vital as the attack surface continues to expand.

What Do IT Leaders Need to Consider When Evaluating On-premise and Cloud Data Security?
Alan Hayward, Sales and Marketing Manager at SEH Technology

As businesses continue to collect, analyse and store a staggering amount of data, one of the biggest choices they need to evaluate is on-premise versus cloud security. Data security is becoming even more critical for enterprises in line with the rise in cyberattacks. With on-premise, the business's servers and data are physically located in the office, and IT leaders can use backup and disaster recovery software to extract data during a cybersecurity threat. Alternatively, with cloud-based security, a third-party company hosts the organisation's servers and data in a data centre, providing additional support to manage the network.

On-premise brings peace of mind
When using on-premise solutions, businesses benefit from end-to-end customisation across the entire network, including the servers and data in the organisation. This ensures that the solution fits their specific needs, as well as giving IT leaders complete peace of mind that the security measures are fit for purpose. On-premise is also the ideal solution for companies in the legal, healthcare or financial industries that are required to follow stricter cybersecurity policies. What's more, these businesses may feel more comfortable hosting their data onsite rather than across the country in a data centre.

On-premise solutions also benefit companies that already have an internal IT team. While the initial investments in software and hardware are higher, having a dedicated team of staff who can manage the infrastructure will ensure the business sees a rapid return on investment. This team will be responsible for keeping the on-premise security operating smoothly, which means the data is both secure and close by, as well as reducing the need to outsource support for a costly fee.

Bolstering flexibility and scalability in the cloud
While on-premise solutions may be adequate for some businesses, others are considering making the move to the cloud for improved security. When working in the cloud, a company's data is not held physically at its office, reducing the risk of security breaches or threats onsite. Additionally, with cloud-based security, data centres have enhanced security features and a dedicated team of employees who can protect companies' data. Cloud systems will also learn about an organisation's network over time, ensuring that the solution can scale with the company and become more secure than on-premise.

Furthermore, with cloud storage, an organisation's data can be housed in a data centre for an unlimited amount of time. This is especially important in today's fast-paced environment as more and more jobs move to digital, so having the enterprise's data online already can streamline business processes. Scalability is also one of the biggest advantages of cloud computing, as data centres can quickly and easily evolve their resources to meet business needs or demands. This may include company growth or expansion across the globe, as well as the move to remote or hybrid working in the post-Covid-19 environment.

To further bolster security, cloud solutions back up a business's data in multiple places on a regular basis. Downtime can have a staggering impact on an organisation, and cloud applications can keep it to a minimum, as data can be restored from a backup faster than with an on-premise solution. Finally, large corporations must abide by a growing number of compliance regulations, and cloud-based security can help these types of businesses to maintain the required infrastructures, as well as ensure current and future regulatory measures are met.

Establishing hybrid cloud solutions
Keeping all this in mind, there is no one-size-fits-all solution when IT leaders are considering on-premise or cloud security. Both approaches can be customised to fit the needs of a business, meaning some organisations are now choosing hybrid options that boast the benefits of both on-premise and cloud solutions. A hybrid cloud is an environment that combines an on-premise data centre with the public cloud, allowing data and applications to be shared between them. Businesses whose data grows beyond the capabilities of their data centre can use the cloud to instantly scale capacity up or down to handle the demand. It also mitigates the time and cost of purchasing, installing and maintaining servers onsite that they may not need in the future.

When an IT leader is asked to assess which technology infrastructure is the best fit for their company, there are a growing number of factors to consider. It's no secret that cloud computing has grown significantly in popularity, offering organisations newfound flexibility and scalability. On the other hand, on-premise software has been tried and tested by businesses and may continue to meet their technical needs adequately. In today's fast-paced market, many organisations are now contemplating the move to cloud applications to better achieve their business goals in the longer term.

Application-Aware Networks Give a Much-Needed Edge to Enterprise Cloud
Daniel Blackwell, Pulsant

A huge surge in the use of SaaS, cloud services and distributed applications followed the mass migration to hybrid working by many thousands of companies.

The combination of office and remote working may once have seemed a short-term response to a crisis, but hybrid is here to stay. A McKinsey global survey of senior executives in large corporations found nine in ten want to continue with this modus operandi beyond the pandemic.
Such major changes in the way enterprises function are only made possible by the user-friendly effectiveness of today's ever-expanding range of well-known business applications and collaboration tools, such as Slack, Dropbox, Zapier and Trello. They cannot function on their own, however. They must be underpinned by the best connectivity possible, or the whole hybrid edifice will collapse. In its research, McKinsey exposed the critical role of networks in the current success of hybrid working. It found that businesses that had maintained connectivity and facilitated "microtransactions" between employees sustained higher levels of productivity at the height of Covid restrictions.

The opportunity for MSPs
Hybrid has been a big transition for enterprises, but for MSPs (managed service providers) it presents new opportunities and demands. Many businesses rushed into hybrid working and, recognising its permanence, are now reappraising how they extract maximum efficiency from it. They expect service providers to help them reconfigure to optimise this new approach. Managed services companies need to ensure they provide these clients with the right technologies and tools and are up to speed with developments in applications and infrastructure. They must facilitate the as-a-service model that their clients increasingly want, and they must focus on security, user experience and integration.

Optimisation is a priority
As a result of all these developments in hybrid working, the demand for infrastructure, cloud storage and compute is increasing. Gartner predicts expenditure on the public cloud will grow by more than 21% in 2022, to exceed $480 billion. Many organisations are already using multiple clouds, with Flexera's 2021 State of the Cloud report finding that enterprises use an average of 2.6 public clouds and 2.7 private clouds. This year's Flexera report finds that enterprises already run 50% of their workloads in public clouds, with the figure set to rise by a further 6%. This expansion of complex cloud infrastructure is becoming unwieldy, making optimisation a priority. Nearly six in ten respondents in this year's Flexera report said optimising the current use of the cloud to achieve cost savings is their "top initiative".

Enterprises need to see the applications on their network
Since modern business and AI applications depend on fast, high-bandwidth networks, organisations must make the best use of their cloud connectivity. They must know where applications are moving throughout their infrastructure, and how best to manage and control them to deliver optimal performance. Teams collaborating through sophisticated, cloud-based design software do not want brownouts any more than participants in multiplayer, immersive games. This all adds to the pressure to ensure low latency, reliability and security. In practical terms, it means focusing attention on the steps that improve network performance and use. For network operators, this dictates a shift towards application-aware networks that provide detailed reporting and intelligence to route applications down the best path.
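A minimal sketch of what that routing decision can look like, assuming invented path metrics and application profiles: each application is steered down the cheapest path that still satisfies its latency and loss requirements.

```python
# Hypothetical path metrics; a real deployment would refresh these
# continuously from network telemetry rather than hard-coding them.
paths = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.1, "cost": 10},
    "broadband": {"latency_ms": 35, "loss_pct": 0.8, "cost": 2},
}

# Per-application requirements (also illustrative assumptions).
app_profiles = {
    "video-conferencing": {"max_latency_ms": 25, "max_loss_pct": 0.5},
    "file-sync":          {"max_latency_ms": 100, "max_loss_pct": 2.0},
}

def best_path(profile):
    """Cheapest path that still meets the application's latency/loss needs."""
    ok = [name for name, m in paths.items()
          if m["latency_ms"] <= profile["max_latency_ms"]
          and m["loss_pct"] <= profile["max_loss_pct"]]
    return min(ok, key=lambda n: paths[n]["cost"]) if ok else None

for app, profile in app_profiles.items():
    print(f"{app}: route via {best_path(profile)}")
```

In this toy case, video conferencing is pinned to the low-latency path while bulk file synchronisation takes the cheaper one - the per-application visibility is what makes that differentiation possible.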
In the digital economy, application experience can make or break a business. Yet making all applications visible to improve the end-user experience is not easy. It often takes far too long to troubleshoot and identify the root cause of a latency or performance problem and develop a resolution. The greater visibility delivered by an application-aware network, by contrast, allows businesses to understand and fix the difficulties for applications on their networks far more quickly, saving them time and the cost of traditionally complex troubleshooting processes.