Improving network security with AI
www.networkseuropemagazine.com

Network security is the backbone of many technology companies, as a working security system protects a whole variety of important information. It consists of all the practices used to prevent and monitor unauthorised access to, or misuse of, a company's databases and resources. This means it's vital that jobs within a network security department are carried out correctly, especially given the pace of today's technological advances.

Artificial intelligence, otherwise known as AI, is the simulation of human processes by machines and computer systems. With new discoveries in technology being made every day, artificial intelligence is something that many businesses will need to get up to speed on, particularly for its use in network security.

Automated network security
As the technology world evolves, automation seems the ideal direction for network security. However, many security professionals are torn over the best approach to automating and streamlining processes, and there is a fear surrounding the reliability and safety of automated network security. The use of artificial intelligence could eliminate this fear. Security policy development and network organisation are two essential components of network security, and AI can automate both of these processes effectively. It can save organisations huge amounts of time and money, as AI can generate policies and procedures to cater to every unique situation. Ultimately, artificial intelligence can build stronger trust in automated network security and could encourage greater use of automation.

More powerful protection
Password protection is important for just about everyone.
Nonetheless, passwords are one of the weakest components of security control, and once cracked they could give criminals access to all sorts of information about employees' identities. This is where artificial intelligence could come in. Developers are beginning to discover that AI could improve biometric authentication and eliminate the weaknesses that come with password protection, making it more robust. It can also be used to detect phishing and ultimately prevent it in the future. Because AI can respond at a much faster pace than a human, these technologies can identify and track over 10,000 active phishing sources. There are also AI tools that can provide information to individuals before they create a password or go through an authentication process. These tools take a whole variety of words and commonly used passwords, along with previously leaked passwords, and turn them into hashes to check against a stolen hash. Although they still require further manual coding, they demonstrate the improvement that AI could bring to network security.

Meeting new business needs
With the recognition of artificial intelligence as a potential key to the future of network security, many more doors can be opened for all sorts of businesses. AI pushes past the limits of traditional network infrastructure, operations and development, and brings forward a future of better security. Although technological growth is positive in many respects, this fast-adapting digital age brings with it thousands of fraudsters trying to collect information. This makes it vital for businesses to find the best methods of keeping their networks secure, and to continuously look for ways to improve them; progressive cybersecurity measures have never been more imperative. AI brings a whole new light to this idea and can really aid the start of smarter cybersecurity.
Alan Hayward
Sales & Marketing Manager
SEH Technology

Fighting the pandemic from the server room
How HPC enables cutting edge research

High Performance Computing (HPC) can enable humans to understand the world around us, from atomic behaviour to how the universe is expanding. While HPC used to be associated with a "supercomputer", today's HPC environments are created using hundreds to thousands of individual servers connected by a high-speed, low-latency networking fabric. To take advantage of thousands of separate computing elements, or "cores", simultaneously, applications need to be designed and implemented to work in parallel, sharing intermediate results as required. For example, a weather prediction application will divide the earth's atmosphere into thousands of 3D cells and simulate the physics of wind, water bodies, atmospheric pressure and other phenomena within each cell. Each cell then needs to communicate the results of the previous simulation step to its neighbours. The more processing power that is available, the smaller each cell can be, and the more accurate the physics.

Optimised accelerators
Until recently, HPC algorithms ran on CPUs from Intel and AMD. Over time, these CPUs became faster and incorporated more cores. More recently, however, a new, highly optimised accelerator has been integrated into HPC systems: the graphics processing unit (GPU), which has increased the performance of specific applications by more than an order of magnitude. GPUs can be found in almost all of the world's Top500 HPC systems. Thousands of applications have been modified to take advantage of thousands of GPU cores simultaneously, with impressive results. HPC is becoming integrated into enterprise workflows, rather than remaining an entirely separate computing infrastructure. An HPC system will typically be composed of many CPUs, a significant number of GPUs, solid-state disk (SSD) drives, fast networking and an entire software ecosystem.
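The cell-and-neighbour pattern in the weather example above is a classic stencil computation. A toy 2D sketch (not a real atmospheric model; the grid size and diffusion constant are arbitrary) shows how each cell's update depends only on its immediate neighbours, which is what lets the work be split across thousands of cores:

```python
import numpy as np

def simulate_step(grid: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """One simulation step: every interior cell exchanges results with its
    four neighbours (a 'stencil' update), as in the weather example."""
    new = grid.copy()
    new[1:-1, 1:-1] += alpha * (
        grid[:-2, 1:-1] + grid[2:, 1:-1]      # north/south neighbours
        + grid[1:-1, :-2] + grid[1:-1, 2:]    # west/east neighbours
        - 4 * grid[1:-1, 1:-1]
    )
    return new

# A coarse 2D 'atmosphere': finer grids (smaller cells) give more accurate
# physics but need proportionally more compute, which is the HPC trade-off.
grid = np.zeros((8, 8))
grid[4, 4] = 100.0            # a local disturbance
for _ in range(10):
    grid = simulate_step(grid)
```

On a real machine the grid is partitioned across servers, and only the thin "halo" of cells along each partition boundary needs to travel over the network fabric each step, which is why low latency matters so much.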
An efficient HPC system will have a balance between the CPUs, GPUs, high-speed memory and the storage system, all working together. This balance is important, as the most expensive resources in an HPC system are the CPUs and GPUs. A well-designed system will be able to deliver data to the processors extremely fast and never "starve" them of actual work.

Martin Galle
Director FAE
Supermicro

Universities and research labs worldwide continue to invest in and create very high-end HPC systems to solve and understand the most complex challenges humankind faces today. Research labs and educational institutions are awash with the massive amounts of data now readily available to researchers. Areas of high interest that use HPC systems include bioinformatics, cosmology, biology, climate study, mechanical simulation and financial services.

Ghent University: turning seven hours of AI experiments into 40 minutes
IDLab is a research lab at Ghent University and the University of Antwerp. The lab wanted to extend its research areas to include AI robotics, IoT and data mining. To accomplish this, IDLab determined that new servers were required, able to house several GPUs within the server enclosure to ensure maximum performance. Early tests indicated that existing applications could run up to ten times faster when using GPUs rather than a pure CPU execution. The quicker times allow researchers to develop better AI algorithms and get results faster than ever before. One of the new servers' requirements was the ability to run multiple jobs on the same server without affecting other applications' performance.
A powerful server was needed, with the compute, memory and GPU capacity to allow this. The challenge was to increase performance by 10x to keep up with current demands. IDLab chose powerful GPU servers that were specifically designed to handle next-generation AI applications. These servers contained two NVIDIA HGX-2 boards, which could accommodate eight GPU boards in one server, matched with the appropriate CPU power. To run AI-based algorithms, the researchers needed to complete various jobs faster so that iterations could be made to these algorithms in a timely manner. The chosen server solution helped them cut experiments from nearly seven hours down to 40 minutes while still delivering high-quality results.

Understanding and tracking COVID-19 at Goethe University Frankfurt
Another case where a high-performance server was needed to optimise research processes was at Goethe University Frankfurt. Its supercomputing centre is known worldwide for enabling a wide range of researchers to use one of the fastest systems in Europe. The architects of this new system determined that the new servers would need to incorporate several GPUs in addition to a high core count CPU. A critical design requirement was a very fast communication path between the CPU and GPU, utilising the PCIe Gen 4 bus. Goethe University chose servers based on AMD EPYC processors and Radeon Instinct MI50 GPUs for this new HPC system. This combination allows massive amounts of data to be shipped between the CPU and the GPU extremely fast, at up to 64 GB/second. Once the GPUs have completed their tasks, the results can quickly be sent back to the CPU.
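At the quoted 64 GB/second, the time to move a working set between CPU and GPU is easy to estimate. A back-of-the-envelope sketch (the 16 GB dataset is an arbitrary example, and real transfers add latency and protocol overhead on top):

```python
def transfer_time_seconds(bytes_to_move: float, bandwidth_gb_s: float = 64.0) -> float:
    """Estimate CPU-to-GPU transfer time over a link such as PCIe Gen 4,
    ignoring latency and protocol overhead."""
    return bytes_to_move / (bandwidth_gb_s * 1e9)

# Moving a hypothetical 16 GB working set at 64 GB/s:
print(transfer_time_seconds(16e9))  # 0.25 seconds
```

Halving the link bandwidth doubles this time, which is why the CPU-to-GPU path was treated as the critical design requirement.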
Goethe University researchers have been using this new system to track the COVID-19 pandemic worldwide, among other research initiatives. Understanding how COVID-19 spreads through a population allows authorities to put policies and action plans in place, and to be prepared for similar challenges in the future.

Molecular dynamics simulations at Lawrence Livermore National Laboratory
In the United States, Lawrence Livermore National Laboratory (LLNL) recently expanded a large-scale HPC system to over 11 petaflops. This system is intended to be used to find treatments and a vaccine for COVID-19, and will also support computational workloads in genomics and other scientific disciplines. The Corona system, named for the total solar eclipse of 2017, was recently outfitted with a large number of servers containing both AMD EPYC CPUs and Radeon Instinct GPUs. The additional computing capacity will, for example, allow researchers to better handle the computationally intensive molecular dynamics simulations. These are critical to understanding, for instance, the structure and function of the virus, which is ultimately the basis for finding a cure for COVID-19. Molecular dynamics simulations have been designed to take advantage of GPUs, increasing their performance significantly.

Breaking down the walls of limited computing capabilities
HPC in research and academic institutions allows a wide range of researchers to focus on new science without being delayed by old and outdated servers. GPUs are increasingly being used to reduce the time to complete many tasks, enabling new algorithms to be developed and iterated. Robust HPC systems are being used to understand a wide range of scientific problems that were previously out of reach due to limited computing capabilities.
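The molecular dynamics simulations mentioned above spend most of their time evaluating interactions between pairs of particles, and it is this independent pairwise structure that GPUs exploit so well. A minimal vectorised sketch of a Lennard-Jones energy evaluation (the parameters and particle positions are illustrative; production codes use neighbour lists and run the inner loops on the GPU):

```python
import numpy as np

def lj_potential_energy(pos: np.ndarray, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Total Lennard-Jones energy over all particle pairs.

    The O(N^2) pairwise structure is embarrassingly parallel, which is
    exactly what thousands of GPU cores can exploit.
    """
    diff = pos[:, None, :] - pos[None, :, :]   # all pairwise displacement vectors
    r2 = (diff ** 2).sum(-1)                   # squared pair distances
    iu = np.triu_indices(len(pos), k=1)        # count each pair once
    inv6 = (sigma ** 2 / r2[iu]) ** 3          # (sigma/r)^6 per pair
    return float(4 * epsilon * np.sum(inv6 ** 2 - inv6))

# Two particles at the potential's minimum separation, 2^(1/6) * sigma:
pos = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])
print(lj_potential_energy(pos))  # -> -1.0 (i.e. -epsilon, the well depth)
```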
Reaping the benefits of a green data centre strategy
By David Watkins, Solutions Director, VIRTUS Data Centres

As a growing number of organisations seek to succeed in today's data-driven economy, the data centre is becoming recognised as one of the most important pieces of business infrastructure. However, the energy-hungry infrastructure that powers our digital lives is often regarded as a blight on the environment and at odds with businesses' sustainability strategies. Today, there are more than eight million data centres globally, which not only dispose of many metric tons of hardware every year but also account for around 1% to 2% of global power consumption, a proportion comparable with the carbon emissions of the airline industry. Experts agree that, if the industry doesn't take action to reduce these numbers, data centres could account for more than 10% of the world's electricity consumption within the next ten years. As a result, greening the data centre is a hot topic for providers and consumers alike. However, while data centre efficiency should certainly be debated and discussed, it must also be recognised that providers have already made great strides from the legacy data centres of years past. In construction particularly, data centres are becoming increasingly environmentally sympathetic, with some providers able to boast impressive green credentials.
A holistic strategy
The most committed data centre providers are focused on meeting green targets by prioritising the delivery of a "cradle to grave" green strategy, where environmental ambitions are built into every step of construction and maintenance. When it comes to building facilities, BREEAM (Building Research Establishment Environmental Assessment Method) standards look at the green credentials of commercial buildings, verifying their performance and comparing them against sustainability benchmarks. BREEAM measures sustainable value in a series of categories, ranging from energy to ecology. Each of these categories addresses the most influential factors, including low-impact design and carbon emissions reduction; design durability and resilience; adaptation to climate change; and ecological value and biodiversity protection. As well as committing to BREEAM specifications, many providers also employ a modular build methodology to deploy capacity as and when required. This drives up utilisation and maximises efficiency, both operationally and in cost terms. Looking at plant management, there are now many technologies and methodologies that can be deployed to drive efficiency. Examples include highly efficient Uninterruptible Power Supplies (UPS), where unused capacity can "hibernate" to reduce electrical losses. CRAC (Computer Room Air Conditioner) units are typically equipped with variable speed fans that regulate in line with demand to reduce energy consumption, and pumps are equipped with variable speed drives that do the same. Chillers often have "free cooling" functionality, where, within certain ambient temperature ranges, cooling can be provided with far less compressor energy.
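The saving from variable speed fans and drives follows from the fan affinity laws: airflow scales roughly linearly with fan speed, while power draw scales with roughly the cube of speed. A quick sketch of why matching fan speed to demand pays off (the 70% figure is just an example):

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law: power draw scales with the cube of fan speed."""
    return speed_fraction ** 3

# Running a CRAC fan at 70% speed to match a lower cooling demand:
print(fan_power_fraction(0.7))  # ~0.343, i.e. roughly a two-thirds power saving
```

The same cube law applies to the variable speed pump drives mentioned above, which is why part-load regulation is such a large efficiency lever.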
Ground and air source heat pumps are also being deployed, along with local energy generation, all making use of clean, naturally available resources.

Addressing the cost factor
For many in the technology industries, "green" has historically meant "expensive". But this perception is simply no longer true. As green technologies develop and become more prevalent, demand is driving down prices, making it much more affordable to be environmentally aware. This is particularly evident when it comes to energy. In recent years the cost of hydrogen fuel cells has plummeted, to the point where they are an economically viable alternative for standby generation. More widely, renewable power is increasingly cheaper than any new electricity capacity based on fossil fuels. Indeed, on average, new solar photovoltaic (PV) and onshore wind power costs less than keeping many existing coal plants in operation, and power purchase and auction results show this trend accelerating, reinforcing the case to phase out coal entirely. In addition, green measures are supported by a number of governments around the world offering tax incentives for investing in environmentally conscious technology, in order to support carbon reduction targets at a national level.

The return on investment: the rewards of being green
Being a responsible operator with a demonstrated commitment to sustainability is not just the right thing to do; it is increasingly what customers are demanding, and it can deliver commercial benefits for those who get it right. As technologies develop, demand is driving prices down, and it's now not just more affordable to be environmentally aware but potentially fiscally beneficial too. For example, reports show that infrastructure efficiency has improved by 16 per cent since 2014, demonstrating that where steps are taken to improve issues like heating and cooling, cost savings can be made.
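Infrastructure efficiency gains of this kind are commonly tracked through Power Usage Effectiveness (PUE), the ratio of total facility power to the power actually delivered to IT equipment. A minimal sketch (the kW figures below are illustrative, not from the report cited above):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A perfect facility would score 1.0; lower is better."""
    return total_facility_kw / it_load_kw

# A hypothetical 1 MW IT load carrying 500 kW of cooling and power overhead:
print(pue(1500.0, 1000.0))  # 1.5
```

Every kilowatt shaved off cooling and power distribution moves this ratio toward 1.0, which is where the heating and cooling improvements mentioned above show up as direct cost savings.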
In terms of customer expectations, renewable energy can pay dividends. Periods of electricity price surges or downtime associated with traditional energy sources can challenge providers to maintain service at the level their users expect, whereas renewables are already demonstrating increased reliability. Furthermore, fixed pricing in renewable energy can help manage budget volatility, which is again important in meeting customer demand. The clearest return on investment for companies who partner with green data centre providers is in cost savings and efficiencies. But there are wider issues at stake too. Helping to ensure that the internet, data use and smart technologies aren't negatively impacting the environment is a crucial tenet of fuelling a more sustainable world for the long term. A connected planet, where remote working and e-commerce are the norm and public services are delivered online, is likely to help reduce pollution for everyone.