Choosing the right solutions for the last mile and harsh environments
Hermann Christen, market development manager, R&M - www.rdm.com

When planning and designing an FTTx network, choosing the right fibre optic cables is crucial. The selection is influenced by a variety of technical, environmental, and economic factors, as well as the type of FTTx application.

1. General selection considerations

Cost: balancing budget with requirements is key. Slightly over-specifying can be more economical in the long run because it reduces maintenance and upgrade needs; over-specifying throughout the network, however, will drive costs up to unacceptable levels.

Cable management: always double-check measurements, make sure terminations are of the right quality, test where necessary, always label and colour-code cables, watch out for cramped conduits, and make sure no cables or bundles rest upon others. Otherwise, signal interference, crosstalk, damage, and failure may result in data transmission errors, performance issues, and downtime.

Density: data-hungry technology will keep growing, but the cabling backbone can't be replaced every few years. High-density cabling and network infrastructure are essential to meeting current bandwidth demand challenges. Planning needs to take into account the need for more ports and fibre cables at access points, and at and between data centres. Cables are required that combine a very high fibre count with the handling attributes of small cables and minimal termination-related hassle.

Automated asset management and tracking: dynamic environments require ongoing, precise, and efficient asset management. A system that offers functions for real-time mapping, managing, analysing, and planning cabling can include asset management. This improves operational efficiency and facilitates passive infrastructure management. Everything can be monitored and administered from a common software tool, with the entire infrastructure represented in a consistent, up-to-date database. Without this, developing expansion plans, carrying out risk analyses, complying with legislation, and introducing best practices are impossible.

2. Solutions for harsh conditions

Moisture and water resistance are crucial for underground and direct-buried installations. Extreme temperatures or chemicals can cause cables to become brittle or less flexible after prolonged exposure, and the cable sheath could be damaged, causing rapid deterioration. Fortunately, special sheathing materials can increase resistance to chemical degradation.

It is important to know which environmental factors are relevant in different locations, as well as the exact intended usage of cabling and components. In this way, you won't need to run cable with the most heavy-duty shielding throughout the entire environment. Instead, you can define solutions that offer the best performance where it counts, without compromising in other areas. Key factors to consider include:

> Chemical load (intensity and duration of chemical influence)
> Concentration, exposure, and temperature
> Chemical resistance
> Rodents

Rodents are one of the biggest threats to cabling, especially if it is buried without conduit. We can distinguish between two levels of cable protection.

Rodent-protected cables prevent damage to the cable core in cases of moderate rodent infestation. We recommend U-DQ(BN)-type armoured cables with glass roving (e-glass yarn), which protects the cable core. These cables provide protection through mechanical resistance for a prolonged period, which is generally adequate, but not unlimited.
Rodent-secure cables feature an additional layer that ensures rodents can't chew into the cable core. Steel tape armoured cables of types U-DQ(ZN)(SR) (one cable sheath) or U-DQ(ZN)H(SR) (two cable sheaths) are recommended for outdoor use in ducts and shafts where rodent infestation cannot be prevented.

Steel wire armour offers more protection than glass roving armour: galvanised steel wires are introduced between the inner and outer sheath. Cables with FRP rod armouring - profiles largely made of glass and resin - offer a level of protection similar to steel wire but are all-dielectric, an advantage in areas with electromagnetic fields (such as along train tracks).

It's also important to consider fibre cable shrinkage: a cable's outer sheath has a higher thermal expansion coefficient than its core. Controlling and reducing cable shrinkage directly improves mechanical and optical performance. Environmental temperature cycling can cause the outer part to shrink, placing stress on the fibre and causing microbending. Fibre cables are designed in such a way that excess optical fibre length compensates for thermal expansion of the surrounding tube. This ensures that elongation of the cable construction doesn't directly affect the fibre: 'overlength' protects the core from bending stress or tension. However, the disadvantage of designed-in fibre overlength is that it becomes more pronounced as the outer sheath shrinks. Low-shrinkage cables display low irreversible shrinkage at higher temperatures, and their attenuation deviations remain stable over the entire duration of a temperature cycling test. A cable jacket with low-shrink characteristics maintains optical performance during temperature variations.
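To make this trade-off concrete, here is a minimal worked example in Python. The material constants and the designed-in excess fibre length (EFL) are illustrative, typical-order-of-magnitude assumptions, not figures from any vendor datasheet; the point is simply that sheath contraction adds to the effective overlength the loose tube must absorb.

    # Illustrative sketch: how sheath shrinkage increases effective fibre overlength.
    # All numbers are assumed, typical-order-of-magnitude values, not vendor data.

    alpha_sheath = 2.0e-4   # thermal expansion coefficient of a PE sheath, 1/K (assumed)
    alpha_fibre  = 5.5e-7   # thermal expansion coefficient of silica fibre, 1/K (assumed)
    delta_t      = -45.0    # cooling from +20 degC to -25 degC, in kelvin
    efl_designed = 0.3e-2   # designed-in excess fibre length: 0.3% (assumed)

    # On cooling, the sheath contracts far more than the fibre. The differential
    # contraction effectively adds to the designed-in overlength.
    differential = (alpha_sheath - alpha_fibre) * delta_t   # negative = contraction
    efl_effective = efl_designed - differential             # overlength grows

    print(f"Sheath/fibre differential contraction: {abs(differential):.4%}")
    print(f"Effective excess fibre length at -25 degC: {efl_effective:.4%}")

With these assumed values, the sheath contracts roughly 0.9% more than the fibre, so a designed-in 0.3% overlength becomes an effective 1.2%. If that exceeds the slack a loose tube can accommodate, the fibre buckles against the tube wall and microbending loss appears - which is exactly why a low-shrinkage jacket matters: it keeps the effective overlength close to the designed value across temperature cycles.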
3. Solutions for the last mile

Cable installation depends on many factors: population density, geology, intended use, reuse of existing infrastructure… There's more to finding a solution than simply applying the most robust sheathing to every cable. You need to consider cost, as well as the fact that improving performance in one area affects cabling in other areas, for example by making handling during installation harder. Cabling on a single site might run through production facilities as well as offices.

One way of adding fibre cable to a conduit is by pulling it through ducts. Key mechanical properties for such cables are tensile strength and dead weight. Polyethylene (PE)-sheathed cables have excellent surface and sliding properties (low frictional resistance) for outdoor applications. Universal-use cables of type U-DQ(ZN) or U-DQ(BN) are also suitable. For shorter outdoor runs we recommend armoured loose tube cables with glass roving, type U-DQ(BN), with an FRLSZH outer sheath.

For buried cables, blowing into pre-laid ducts is the most economical installation method. Cables for blowing should be lightweight, slim, relatively stiff, and have an outer sheath with excellent sliding properties. Micro cables of type A-D2Y with a loose tube of 1.2 mm (4 fibres), 2.0 mm (12 fibres), or 3.0 mm (24 fibres) are ideal for blowing thanks to their low weight, optimum stiffness, and minimal outer diameter.

Cables that are 'direct buried', without ducts, need to offer a high level of crush resistance and longitudinal water tightness. High-density polyethylene (HDPE) sheathing is advisable - PE is sufficiently resistant to all chemical influences that direct-buried cables might conceivably be exposed to. We recommend corrugated steel tape armoured cables of type A-DQ(ZN)(SR)2Y or the double-sheathed version A-DQ(ZN)2Y(SR)2Y.

Aerial cables are continuously exposed to environmental conditions, so requirements are far higher than for buried cables. We recommend aerial cables encased in either UV-stable HDPE or FRLSZH outer sheathing, designed for a temperature range of -25 °C to +70 °C. In aerial drop applications, being all-dielectric and self-supporting is essential. U-(ZN)H type cabling can traverse span lengths of up to 70 m.

By carefully considering the above factors and working with knowledgeable vendors or consultants, you can make informed decisions when choosing fibre cables for your FTTx network.

Uncovering the encrypted threats flowing through your data centre
Mark Jow, technical evangelist, EMEA at Gigamon - www.gigamon.com

Network visibility is a fundamental security objective in today's environment. Gaining oversight of suspicious traffic empowers threat detection, informs overall security posture management efforts, and forms the foundation for a Zero Trust strategy. Mark Jow, technical evangelist at Gigamon, outlines the impact of encrypted traffic on this goal and discusses ways to implement effective decryption.

Encryption has long been used for security purposes - can you explain how it also poses a threat?

First, let me outline today's data centre security challenges. It is no secret that data centres are processing an immense amount of traffic. As business data grows and organisations move into increasingly hybrid cloud environments, monitoring all of this traffic for security risks becomes a real challenge. Malicious actors often hide in networks to exfiltrate sensitive data and maximise their impact, meaning data centre operators now need oversight of all East-West traffic for effective threat detection. Despite network visibility being a known security priority, Gigamon research found that less than half (48%) of organisations have insight into data moving laterally across their networks.

At the same time, security leaders have turned to encryption to hide information from prying eyes. Today, encryption makes up over 90% of all internet traffic, and the proportion of encrypted traffic within internal enterprise networks is growing fast. While encryption is a proven way to protect sensitive information, it also contributes to poor network visibility without robust decryption and analysis. Cybercriminals are becoming increasingly adept at using our security methods against us, and encrypted traffic is a powerful tool in their playbook. The same confidentiality that protects data also hides malware, suspicious activity, and data exfiltration from our security teams. This doesn't mean encryption is failing, but data centre security needs to step up to the challenge of decrypting traffic for better threat detection and response.

Why is it important to decrypt traffic now?

It might sound like a cliché, but the first step towards preventing encrypted threats is to acknowledge the risk they pose. It is impossible to defend against a threat you cannot see and can't measure within your own data centres.
Encryption is playing a larger role in attacks, and with great success. Over 90% of malware attacks now use encryption to evade detection, and yet a Gigamon survey found that only 30% of IT and security professionals have visibility into encrypted traffic. Businesses already understand the danger of security blind spots - over half of respondents in the same survey named it their top concern - but until now the practical challenges have been holding security teams back. The result is that these accepted, unanalysed threats are contributing to a threat landscape in which almost one in three successful breaches goes undetected.

In recent years, the rise of microservices and the increasing adoption of containerised applications have exacerbated this trend by increasing internal traffic. Machine-to-machine and server-to-server traffic is the perfect means for an attack to proliferate and spread. Monitoring lateral movement has become much more important in the last few years.

Why isn't traditional decryption working? How could it work better?

Unfortunately, traditional decryption is just not a viable solution for many data centre operators. Decryption usually takes place at the perimeter - at the firewall, in an appliance, or on a load balancer. These methods require a lot of configuration, key management, and vast amounts of compute power to decrypt traffic, inspect the contents, and then re-encrypt. When most traffic is encrypted, it adds up.

There are a few tricks to save resources in the decryption process. Businesses investing in decryption can take steps to minimise redundant traffic inspection and identify which network packets should be prioritised. Tactics such as application filtering, in which traffic signatures are used to distinguish between high- and low-risk applications, enable teams to apply risk-based decryption across their data centres. However, it's important to remember that these assessments are not foolproof.

Another similar, low-cost approach lies in deploying data management strategies to reduce the traffic flowing across a data centre and achieve the necessary visibility to assess risk. Data centre networks are structured with resiliency and availability in mind, but this approach creates duplicate network packets across a network. Because decryption is costly, implementing deduplication ensures that each network packet is only analysed once, as the sketch below illustrates.
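As a rough illustration of that deduplication step, the following Python sketch drops duplicate packets by fingerprinting the invariant part of each packet before it reaches the expensive decrypt-and-inspect stage. This is a minimal, assumed design for illustration only - real visibility fabrics do this at line rate and must also mask fields, such as TTLs, that legitimately differ between copies of the same packet.

    import hashlib

    # Fingerprints of packets already seen; a production system would use a
    # time-bounded window rather than an ever-growing set.
    seen: set[bytes] = set()

    def is_duplicate(packet: bytes, header_skip: int = 14) -> bool:
        """Return True if this packet has already been processed.

        header_skip drops the Ethernet header (14 bytes), since duplicate
        copies captured at different taps often differ only in L2 fields.
        """
        digest = hashlib.sha256(packet[header_skip:]).digest()
        if digest in seen:
            return True
        seen.add(digest)
        return False

    def analyse(packet: bytes) -> None:
        # Placeholder for the expensive decrypt-and-inspect pipeline.
        print(f"inspecting {len(packet)} bytes")

    # Two taps captured the same segment; only one copy reaches analysis.
    copies = [b"\x00" * 14 + b"payload", b"\xff" * 14 + b"payload"]
    for pkt in copies:
        if not is_duplicate(pkt):
            analyse(pkt)

The design choice worth noting is that hashing happens before decryption: duplicates are discarded while still encrypted, so the saved work is the full decrypt-inspect-re-encrypt cycle, not just the inspection.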
These approaches are a good first step to reduce the financial and compute drain of a more robust decryption effort, but the reality is that inspecting all traffic is the only way to stop encrypted threats. To meet this need, today's fast-growing networks need a more efficient decryption solution that is low cost, low CPU, and simple. Precryption technology is the solution.

Technology is always evolving, and decryption is no different. It is now possible to rethink the challenge of encrypted traffic visibility, and Precryption resolves the issue by going directly to the source. Without a plausible, affordable decryption method, data centre operators and owners have largely accepted defeat when it comes to encrypted threats - or invested in very heavy solutions that require large-scale changes to networks and drain energy, space, and budget. But inside each network packet lies the key to a new approach. Precryption works by accessing the Linux kernel to get plain-text traffic visibility before encryption is ever performed. In doing this, security teams use less compute power to analyse more traffic. By cutting the decryption and re-encryption stages out of the equation, it also allows security tools to detect threats quickly even as traffic continues to increase.
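Gigamon's implementation lives in the kernel, but the underlying principle - tap the payload before the TLS layer encrypts it, so no decryption is ever needed - can be shown with a deliberately simplified, application-level Python sketch. The tap placement and the inspect_payload callback here are assumptions for illustration, not Gigamon's API.

    import socket
    import ssl

    def inspect_payload(payload: bytes) -> None:
        # Stand-in for a security tool: it receives plain text, so no key
        # management or decryption cycles are spent at all.
        print(f"tap saw {len(payload)} plain-text bytes")

    def tapped_send(tls_sock: ssl.SSLSocket, payload: bytes) -> None:
        """Mirror the payload to an inspection tap, then hand it to TLS.

        The tap sits before encryption, so it sees plain text; the wire
        still carries normal ciphertext.
        """
        inspect_payload(payload)   # visibility without decryption
        tls_sock.sendall(payload)  # the TLS layer encrypts from here on

    # Usage (assumes a reachable TLS server):
    # ctx = ssl.create_default_context()
    # with socket.create_connection(("example.com", 443)) as raw:
    #     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
    #         tapped_send(tls, b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")

Kernel-level approaches hook the same boundary (for example, via probes on a TLS library's write path) but do so for every workload on the host, without touching application code - which is what makes the technique low cost and simple to deploy.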
Until now, encrypted threats have been thriving in a perfect storm of low visibility, high traffic, and optimistic risk assessments. Now is the time to face up to this risk and pursue true traffic visibility.

Minimising disruptions in the digital ecosystem
Michael Szabados, COO, NETSCOUT - www.netscout.com

As essential organisations such as network carriers, government agencies, and corporations - which help to save lives, ensure vital goods and services are streamlined, and enable hybrid or remote working for millions - become more complex, their digital ecosystems also become harder to manage and more vulnerable to disruption.

This complexity is to be expected. For the organisations that keep the world on track to move forward, progress can't stop. In the modern world, that means digital transformation should not disrupt everyday operations. As such, enterprises should consider how to reduce the risk of massive disruption across digital services while pursuing cloud migration, customer experience, supply chain, and intelligent automation strategies at the same time.

As essential organisations grow, they place more data and compute resources at the edge of operations, thereby increasing their attack surface. For most large, global businesses, everything is connected, with the internet essentially becoming the corporate network. Defence and performance tools which may have worked in years gone by could develop defects and frailties, leading to the emergence of new vulnerabilities. Nowadays, performance, security, and availability/distributed denial-of-service (DDoS) threats are ever-growing and tend to be interconnected. Add a shortage of cybersecurity expertise to this, and the requirement for large organisations to adopt a modern, automated, and visibility-driven platform approach in order to detect, explore, and resolve these issues becomes clear.

Installing a visibility platform

To address these challenges, organisations should seek a solution that leverages a common data foundation, using real-time network intelligence to break down the borders and remove the blind spots that can delay or slow down modern IT initiatives, from digital transformation and talent management to system consolidation and cross-team collaboration. A solution like this is commonly known as a visibility platform. For the platform to be effective, it must have seven key features:

Any application
It is important that the visibility platform can support all applications, whether standard, custom, expertise-required, or customer-facing. Only then can the programme support team confidently highlight and resolve any network or security problem, significantly reducing mean time to resolution (MTTR). This is critical, since network performance issues have revealed several major security breaches in recent years.

Any scale
The platform must provide the same type of data irrespective of domain size - small, medium, or large enterprise - running at any speed, including in excess of 100G. To perform in this manner, a truly effective visibility solution generates a reliable, scalable, accurate, historical, real-time data set which represents all activities taking place on the network, wherever the digital ecosystem is operating from. This data set must be taken from data packets - the atomic units of any digital environment, and the nexus of both performance and security.

Anywhere
A visibility solution should operate seamlessly from end to end at any scale, regardless of whether the enterprise runs on-premises, fully in the cloud, or a mix of both - a hybrid/co-location model. Being able to use the same data type and blueprint ensures continuous, unbroken visibility, with the ability to scale everywhere with the business.

Anytime
In today's world, visibility is critical for organisations. A visibility platform must be able to perform in-depth, protocol-level analysis and forensic evidence collection using either real-time data capture or historical data mining. This offers enterprises a holistic, comprehensive view of the digital environment.

Any operational team
A visibility platform that obtains contextual metadata from data packets produces a shared source of objective information, encouraging cross-team collaboration between NetOps, SecOps, and AIOps teams (a minimal sketch of such packet-to-metadata extraction appears at the end of this article). Being able to draw on the same data source provides a faster path to the root cause, as well as the ability to differentiate between cause and effect, prioritise investigation of network abnormalities based on possible outcomes, and select the appropriate response for whatever business risk enterprises are facing. The data also supplies evidence for stakeholders and service providers, preventing organisations from spending unnecessary time playing the blame game.

Any ecosystem
It is essential that the packet-level visibility data can be shared with existing analytics and security platforms, such as Splunk and ServiceNow, so that it becomes connective tissue - a source of agility and reduced risk - for the whole IT team. Being able to feed rich, root-cause data to these applications maximises their capacity to effectively manage cloud costs and software assets, as well as to streamline operations.

Any vendor
The platform must support any combination of cloud or network equipment monitoring vendors the business has deployed within its ecosystem.

For global enterprises and organisations to prevent digital disruption while continuing to provide essential products and services, they must assure availability, performance, and security for their increasingly complex digital environment - irrespective of where their employees and customers are. This requires businesses to implement a visibility platform. As a scalable, consistent, real-time visibility solution which operates across all of their online assets, it will ensure the continued growth and success of an essential organisation. It may also improve customer experience, operational efficiency, and performance for those who rely on these products and services on a daily basis.
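As referenced under 'Any operational team', here is a minimal sketch of what extracting contextual metadata from a packet can look like. It is an illustrative Python fragment assuming a raw IPv4/TCP packet with the Ethernet header already stripped; production platforms derive far richer, protocol-aware metadata than this flow five-tuple.

    import struct

    def five_tuple(packet: bytes) -> dict:
        """Extract the classic flow five-tuple from a raw IPv4/TCP packet."""
        # IPv4 header: version/IHL in byte 0, protocol in byte 9,
        # source and destination addresses in bytes 12-19.
        ihl = (packet[0] & 0x0F) * 4   # IP header length in bytes
        proto = packet[9]
        src_ip = ".".join(str(b) for b in packet[12:16])
        dst_ip = ".".join(str(b) for b in packet[16:20])
        # The TCP header starts right after the IP header; the ports are
        # its first two 16-bit fields.
        src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
        return {
            "src": f"{src_ip}:{src_port}",
            "dst": f"{dst_ip}:{dst_port}",
            "protocol": proto,   # 6 = TCP
        }

    # Example: a hand-crafted 20-byte IPv4 header plus a TCP header stub.
    hdr = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
                 10, 0, 0, 1, 10, 0, 0, 2])
    tcp = struct.pack("!HH", 443, 51000) + bytes(16)
    print(five_tuple(hdr + tcp))
    # {'src': '10.0.0.1:443', 'dst': '10.0.0.2:51000', 'protocol': 6}

Because every team reads the same packet-derived records, NetOps and SecOps arguments about whose telemetry is right disappear: the packet is the shared source of truth.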