Is there agility to match the innovation?

Most digital transformation projects are naturally geared towards innovation and futureproofing, and you'd be right to assume that the larger, well-known players in the industry are able to deliver this for you. However, they are no longer the only option. Both types of integrators can deliver innovation, so the choice between the two really comes down to the level of expertise you require.

What's at the heart of the business model?

Large and small SIs both work to achieve the same outcome, but with two fundamentally different business models. Smaller SIs that operate within a specific niche lead with their solutions and specialisation, rather than selling people and time. For example, if an SI specialises in SAP in the hyperscale Cloud, that expertise can bring innovation into the process through automation. On the other end of the scale, larger GSIs are more focused on providing people and time. With hundreds of thousands of workers on standby waiting for the next project to come in, larger SIs will look to take control of an entire project, offering everything across the board and providing a wider service. So, if your business is looking for support across a wider part of the business, larger GSIs may work best. But if the focus is more pinpointed and innovation is the name of the game, then specialist integrators should be your choice.

What type of expertise do you require?

First and foremost, you need to trust the company you partner with, and that will likely be determined by their past experience. The difference when it comes to experience is linked to the scope and scale of your problem. Specialist integrators can rapidly spot the problems customers are likely to face and respond with a targeted solution because they have deep expertise in the area. Larger organisations come into their own when a problem requires a wide array of experience, which they can deliver thanks to their broader scope of services and, therefore, resources.

Experience and trust are the foundations of any strong partnership with your system integrator – it all comes down to which experience you value more. You need an SI who delivers the level of service needed, without suffering from a one-size-fits-all approach.

Do they offer the required flexibility for seamless collaboration?

To realise enhanced performance with Cloud technology, organisations, providers and system integrators need to work as one and become far more collaborative. Your SI needs to be able to work in tandem with other providers and your internal team. GSIs have a process and model that works very well, and standardised training that teaches everyone to do things in the same way. This certainly has its benefits, but when it comes to areas that need a little more customisation – a necessity when realising new capabilities through Cloud technology – some GSIs don't have the flexibility to adapt. Smaller system integrators, however, tend to be more flexible. They are interested in delivering solutions, not people, so they can adapt to implement the right solution for each customer scenario.

Does the cost align with the service?

A big part of the decision will usually come down to cost. Typically, GSIs will have higher costs. Having more people on a team naturally leads to an increase in overheads, and the scale of resources will contribute to the final figure as well.
Smaller SIs, on the other hand, stick to their core area of specialism, so they can give a fair price without the need to add unnecessary contingency. For example, SIs focused on hyperscalers understand intimately how to secure funding from the hyperscalers and their partners to cover much of the cost of projects such as Cloud migrations. Knowing where you want to spend the majority of your budget will again depend on the business objectives you originally set out.

Can they sufficiently help mitigate risk?

Historically, companies defaulted to a global integrator brand to mitigate the risk associated with complex technology projects. However, more and more organisations are recognising the capabilities and value of smaller SIs in the same space. In some instances, GSIs have hit problems delivering Cloud projects and reducing the associated risks, and a specialist SI is called in by the customer, the hyperscaler or even the GSI itself to rescue the project. Opting for a large GSI with standardised processes is an appealing choice for organisations, but it's important not to disregard smaller integrators when they can offer just as much value (and safety and security).

The scores

If you've been through all these questions and your chosen system integrator meets requirements and expectations, then they're clearly a viable candidate. However, if you're left feeling slightly uncertain, then widen your research and explore SIs you previously omitted from the list. There are benefits in choosing a large global SI or a more specialist, niche integrator, so it all comes down to where your priorities lie and which group meets your specific requirements. Whichever route you go down, the most important point is that your system integrator is going to be a strategic partner throughout the entire Cloud adoption process, all the way through migration and into the operating stages, so you need to be aligned in your approaches. Specialist integrators now provide excellent alternatives because they bring a layer of focused expertise, which large integrators may not be able to offer on a flexible resourcing basis.

Why Edge Data Centre Design Should be Different from Everything that Came Before

In compute terms, a consensus has been reached on definitions of edge, but agreed definitions of what constitutes an edge data centre are rare and largely depend on who is answering the question. The definitive edge definition remains an ongoing conversation for now. However, the case for edge computing has been made. Adoption drivers such as localised compute and storage, IoT, 5G-enabled data, and applications such as autonomous vehicles are well documented. Processing, storage and transport (networking) performance and latency demands dictate that the gap between the data source and the user must shrink.

But data at the edge is varied, and edge data centre designs will reflect this. Edge data types are completely different from the corporate data for which we designed centralised data centres. The data managed inside enterprise data centres was categorised and put into 'bases', 'lakes' and 'warehouses', and managed within a fairly standard physical environment composed of adequate white space, with allocated (and often excessive) power and cooling. The edge is changing all that.
We are entering a time when a single organisation may operate many different types of distributed edge infrastructure consisting of hundreds or thousands of units. Consider that a single edge environment could contain many dozens of data centres, with power ratings that span a few hundred kW to 10+ MW. Now consider that in traditional data centre builds it remains a struggle to get close to $8 million per MW. To deliver this at the edge is even more challenging.

With edge, an additional big requirement is remote autonomous operation, monitoring and non-invasive maintenance. This must be designed in, but how can it be achieved? And still the questions for design engineers continue to mount. What will be the primary power sources for such diverse edge data centre types? What will power back-up look like? How will edge data centres be cooled?

Clearly, different designs will be needed according to scale. However, an approach based on taking existing power and cooling infrastructure and trying to shrink it (while aiming for a vague Tier III-type resiliency) will not suffice. Yet edge discussions often arrive at: "We'll design them like we always have done." This approach is both short-sighted and impractical. The fact is that for edge data centres to work efficiently, something quite technically clever must be achieved. Some of this will involve using standard components and not overdesigning.

At the smaller end of the scale, the big infrastructure manufacturers are creating production-line-built units using standard components and repeatable processes. This makes sense from a cost-of-production angle. There may be a certain amount of flexibility in terms of specification, but even big manufacturers can't afford too many design iterations, given time constraints and speed-of-deployment requirements. Some manufacturers are standardising components and form factors in an effort to scale out economically.

The question for the edge data centre industry is: "Can I drop the same edge data centre in any place?" Even with wide thermal envelopes and the exciting developments happening on the electrical side, we don't think we're there yet. The need is for non-site-specific, repeatable building-block designs that are indifferent to site size across regional locations. A 10+ MW edge data centre in a container or purpose-built shell, sitting on a concrete base beside a processing plant or remote RER (renewable energy resource), is of a completely different design to a 500 kW turnkey data centre unit that may be deployed by a 5G network operator or across the industrial production facilities of a major manufacturer.

The good news is that there is plenty of scope yet for design innovation at both ends of the scale, but first we must avoid the obvious trap. Shrink-to-fit is not an answer that will achieve great edge data centre designs with the best power and cooling technical solutions at affordable Capex and Opex. Edge market growth forecasts are good, but before designs get off the drawing board, designers need to acknowledge that edge data centres will have to cope with a heady mix of volatile and hostile environments. Digital infrastructure that is placed physically at the edge will be far from standard. Emerging markets such as edge computing present many exciting opportunities, but also significant technical and engineering challenges.
Ed Ansett, Chairman and Founder, i3 Solutions Group

Hybrid Data Centres

Contrary to what one might believe, the hybrid data centre is not just one building but rather a platform of smart, networked, interconnected buildings with on-ramps to public and private cloud operators, and on-premise equipment, across one or more geographies.

The hybrid data centre is a smart solution for organisations seeking to de-risk their digital strategy for their most valuable data, allowing them to step away from managing the space, maintenance and connectivity of their infrastructure and concentrate on their business. It affords users the agility to move their data and associated application workflows as needed, based on their digital strategy. What's more, it uses advanced design architecture, not only for the physical buildings themselves but also to align with customers' needs for agile growth today and in the future.

Around the world, data is generated continuously in different environments, and hybrid data centres make it easy for clients to move data around to suit their needs. You can think of a hybrid data centre as a train station platform, where passengers – the data – change trains on their commute. Because the hybrid data centre is interconnected and houses public cloud, private cloud and the customer's on-premise equipment, if the data has to 'catch another train' or 'go to a different platform', it can.

But what should clients be thinking about when looking for a hybrid data centre partner? There are three main considerations: resiliency, robustness and sustainability.

Resilient infrastructure and location

The first is resilient infrastructure in the digital gateway markets, including proximity to those markets; the ability to provide multiple buildings within these digital gateway markets is critical. Customers should look for the right coverage of the markets where public cloud, private cloud and enterprise deployments are all aggregating – it is imperative they choose a hybrid data centre operator able to serve public cloud customers, private cloud organisations and enterprises aggregating their own cloud data. They should look for dynamic, scalable infrastructure, and a vendor able to optimise technical performance and guarantee close proximity to, or location within, digital gateway markets. While legacy latency challenges no longer make short physical distances strictly necessary, it is still more efficient and environmentally friendly for enterprises to access and process data as close as possible to the source where it is generated.

Robust ecosystem

The second is finding an established, trusted provider with a track record of superior service and operational excellence. Robust ecosystems are not built overnight but over time, and the leading providers will become evident. That is because public and private cloud operators are likely to invest in sites near the big hybrid data centre operators: the cloud operators will also want to serve the private customers in the portfolio, and proximity is something they look for too. Another sign of a robust ecosystem operator is one that can offer cloud adjacency, whereby an enterprise customer is as close to a public or private cloud as possible – not just in the same city but within the same hall of a building, or across the street on campus.
Commitment to sustainability

The third is choosing a partner with a strong commitment to environmental, social and governance (ESG) principles. The best hybrid data centre providers continually develop and deploy innovative technologies without sacrificing resiliency – they are always investing in improvements for the future, with sustainability at the heart of those plans. Enterprises can find out about a provider's ESG strategy via its annual sustainability reports, but it is also a good idea to find out, through on-site visits, what a vendor is doing to care for employees and local communities. That way they aren't just ticking the boxes on technical features, but are also able to understand what the company – and its supply chain partners – are doing to work towards a more sustainable future, so that they can ensure company values are aligned and reassure any investors asking questions of that fact too.

The future

With hybrid data centres typically housing many of the world's largest organisations' most critical infrastructure, it is vital that enterprises keep these three considerations top of mind to select the hybrid data centre provider that best suits them. They should remember that the leading hybrid data centre providers will work closely with them, designing around their unique needs, applications and workflows so that these can move seamlessly to support the business as it adapts to continuous change. As an industry, we are seeing change more rapidly than ever before, and the speed at which enterprise customers are evolving means hybrid data centre providers have to transform at the same pace. Enterprises should therefore not only choose vendors with the ability to provide resilient and sustainable operations together with robust ecosystems in the present, but also look at how they offer agility at scale, so they eliminate pain points and are set for growth in 2023 and beyond.

Caroline Caldarera, Assistant Vice President, Enterprise Sales, CyrusOne

Getting The Data Lifecycle Right to Accelerate Digital Transformation Strategies

Peter Ruffley, CEO, Zizo

"Data is the key to transforming any business."

Accelerated by Covid-19, businesses have started to become more data-driven and embrace digital transformation strategies. However, these often become wasted endeavours as organisations are not brave or honest enough to look at their existing data resources – and realise what they have been missing from the start: the right data.

This failure to understand data – what a business has and what a business needs – is compromising far too many digital transformation plans and leading businesses to waste years on projects that, ultimately, will never deliver. Instead, Peter Ruffley, CEO of Zizo, emphasises the importance of the 'get started, learn fast' model: going through the data lifecycle in order to understand what data a business has today and what valuable insight can be immediately leveraged, and then building on that foundation to drive the digital transformation process.

Digital Transformation Paralysis

One of the biggest issues facing companies of all sizes is a complete lack of knowledge – or honesty – about current data resources. Don't assume, for example, that data is being regularly collected as stated, or that customer files are up to date and accurate. The quality of data an organisation can function on is much lower than the standard required for digital transformation – and that is a fast track to expensive mistakes and wasted endeavour.

The catalyst for a business to embark on a digital transformation journey is a desire to 'change something'. But after spending months, even years, determining short, medium and long-term business goals, teams only later discover that the data required to support this change has not been collected. Such digital transformation journeys will fail before they begin.
A 'data-first' approach turns the model on its head. By understanding the existing data resources first, organisations can drive effective change and unlock immediate value – only then will they be able to explore the real opportunities they have to meet needs and realise ambitions. Businesses need to get the foundations right: the right quality of data, available at the right time.

Additionally, changes in personnel over time can put a halt to the digital transformation journey. Such initiatives are often driven by specific individuals within the organisation, and they cannot be sustained if the people who originally inspired the change are no longer with the business. To make a success of the digital transformation journey, businesses have to start the process quickly, to ensure that the same people with the same impetus are running it, or else efforts will be wasted. This speed will also ensure the business can achieve change more quickly and, in turn, inspire broader commitment by encouraging employees to recognise quality data as a vital contributor to the firm's success.

A different approach is needed if digital transformation is to succeed. Businesses need to go through the four stages of the data lifecycle to understand what data they have, how they can use it and, if necessary, take corrective action on the data – rather than pressing ahead towards inevitable failure.

Collect

It can appear simple to collect data but, as far too many companies have discovered, there is a huge difference between any data and the right data. Without the right approach, businesses can end up collecting too much (or too little) data or, in the worst scenarios, collecting the wrong data. Data quality is also vital if business users are to trust the information to make key decisions. What is the point of collecting 'free text' information with inconsistent spelling or missing postcodes, for example? That data is guaranteed to be of insufficient quality to use in a digital context. Without collecting the right, usable data from the outset, businesses risk compromising the entire data lifecycle – and derailing digital transformation initiatives as a result. Robust data collection processes look closely at the 'how, where and what' to ensure the correct data is in place, and use expert data validation to determine the quality of data before moving to the next stage of the data lifecycle.
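To make the collect stage a little more concrete, the short sketch below profiles an incoming extract for exactly the problems described above: missing or malformed postcodes and inconsistently keyed free text. It is a minimal, illustrative example only, written in Python with pandas and assuming hypothetical column names ('postcode', 'free_text'); it is not any vendor's tooling, and real validation rules would come from the business's own data standards.

```python
# Minimal, illustrative data-quality checks for the 'collect' stage.
# Assumes a hypothetical extract with 'postcode' and 'free_text' columns;
# real validation rules would come from the business's own data standards.
import re

import pandas as pd

UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)


def profile_quality(df: pd.DataFrame) -> dict:
    """Report simple quality metrics instead of silently accepting the data."""
    postcodes = df["postcode"].fillna("").str.strip()
    free_text = df["free_text"].fillna("").str.strip()
    valid = postcodes.apply(lambda p: bool(UK_POSTCODE.match(p)))
    return {
        "rows": len(df),
        "missing_postcode": int((postcodes == "").sum()),
        "invalid_postcode": int((~valid & (postcodes != "")).sum()),
        "empty_free_text": int((free_text == "").sum()),
        # Case-only duplicates hint at inconsistent spelling or keying.
        "inconsistent_free_text": int(
            free_text.str.lower().duplicated().sum() - free_text.duplicated().sum()
        ),
    }


if __name__ == "__main__":
    records = pd.DataFrame({
        "postcode": ["SW1A 1AA", None, "NOT-A-CODE"],
        "free_text": ["Renewal enquiry", "renewal enquiry", ""],
    })
    print(profile_quality(records))
```

Even checks this simple, run at collection time, give a business an honest picture of whether its data is fit to move into the next stage of the lifecycle.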
Combine

Organisations of all sizes are often data-rich but insight-poor: there is a huge gap between creating an extensive data resource and actually unlocking real business value. Single sources of information can be interesting, but the true business picture can only be revealed by combining multiple data sources. What information is required by the business? Which data sources can be combined to reveal vital business insights? And what is the best approach to combining data to ensure the right information is produced?

Combining data is a complex process. There are myriad tools and solutions available, but different data sources and different data structures complicate the task. Failure to understand the implications of different data constraints – such as inconsistent data – can, again, derail the process and undermine confidence in the data.

Context

After the collection and combination stages of the data lifecycle, the context stage is fundamental for business growth and for making effective change happen. Data may have intrinsic value, but its only true value to the business is the information it provides. Contextualisation is therefore crucial to create this information and deliver actionable insights, in turn enabling intelligent decision-making. Without an effective data model, there can be no clear vision of how to add that context, whether business or operational, and attempts to present the data as information to the right people and deliver real insights from it will not succeed. This can be particularly difficult for small and medium-sized businesses (SMBs), because it is an analytical process that requires specific skills – skills that may be lacking in-house. Working with an independent data expert can help businesses to understand their data and, by applying algorithms derived from Machine Learning and Artificial Intelligence to produce insights, organisations can derive value from the data more quickly and benefit from the insights produced.

Change

The most critical aspect of the data lifecycle (collect, combine, context, change) is to remember that it is a cycle, not a finite process. As businesses undertake each of these stages, changes may occur, or need to be made, to make the cycle and its end results more effective. For example, if the business requires more data to understand how a particular operation is achieved, changes need to be made at the 'data collection' stage. It is important to remain agile and flexible throughout the process, learning from business findings at each stage and identifying the areas of the business that need improvement. This is a continually evolving cycle, and businesses need to repeat and change it where necessary.

Conclusion

Data is the essential ingredient in the digital transformation journey, and to be successful it is crucial that businesses have an appropriate strategy in place to get their data right. By going through the data lifecycle, making changes where necessary and leveraging insights from new analytics, businesses can become data-driven and make better-informed decisions, which, in turn, will act as a catalyst to accelerate the digital transformation journey.