Edge to core to cloud: striking the right data storage balance

Stephane Cardot, Director Pre-Sales EMEA at Quantum

The accelerating evolution of cloud computing over recent years, combined with spiralling data volumes, has quickly led to the advent of edge computing. This has become a fast-growing market segment that MarketsandMarkets predicted would grow from $2.8 billion in 2019 to $9 billion by 2024.

A new report from Million Insights has taken this a step further, estimating that the market will be worth $43.4 billion globally in 2027.

By offering organisations a way to compute at or near the source of data, edge computing provides the benefits of the cloud but with a local angle. As well as reducing latency and enhancing security, it also offers significant savings on the bandwidth costs associated with sending everything to the cloud.

With the proliferation of IoT devices, sensors and robots spanning industries like autonomous vehicles, manufacturing and healthcare, edge computing is creating vast amounts of unstructured data that, at some point in its lifecycle, needs to be centralised at the core or moved to the cloud.

This is presenting organisations with some complex data management challenges that must be overcome if they want to use their data in the most efficient way possible. But, thanks to technological advancements, there are several best practices businesses can follow to ensure they strike the right balance when it comes to storing and managing the data they create.

Dealing with data

As the volume of data being collected at the edge has continued to grow, organisations’ storage requirements and priorities have shifted. For example, because these edge sites are now so active, ensuring that the infrastructure is running in a secure environment has taken on newfound importance. Today’s businesses must invest in their ability to secure edge data against ransomware and other cyber attacks. This is driving increased security around the management of data, along with replication capabilities in the core data centre.

Businesses also need to take a smarter approach to data storage to ensure cost-efficiency. With so much data being created, storing the right data in the right way – whether that’s active storage or long-term archiving – is essential to reducing costs when dealing with large data volumes.
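To make this concrete, here is a minimal sketch in Python of what such a tiering policy could look like. The tier names and age thresholds are illustrative assumptions, not a reference to any particular product; real values would depend on access patterns and provider pricing:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- real values depend on access patterns and pricing.
ACTIVE_WINDOW = timedelta(days=30)     # recently used data stays on fast storage
NEARLINE_WINDOW = timedelta(days=180)  # older data moves to cheaper object storage

def choose_tier(last_accessed: datetime, now: datetime | None = None) -> str:
    """Return a storage tier name based on how recently the data was accessed."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age <= ACTIVE_WINDOW:
        return "active"    # low-latency primary storage
    if age <= NEARLINE_WINDOW:
        return "nearline"  # cheaper object storage, still online
    return "archive"       # long-term cold storage, slow to retrieve

print(choose_tier(datetime.utcnow() - timedelta(days=400)))  # -> "archive"
```

Even a rule this simple can keep rarely touched data off expensive primary storage, which is where most of the cost savings come from at scale.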

These challenges are compounded by the growing need to apply AI and machine learning capabilities to the raw data as it's collected at an exponential rate. The need to analyse and learn from this data has never been greater, accelerating the push to centralise unstructured edge data to wherever this compute power resides. This is particularly true in verticals such as manufacturing, automotive and autonomous vehicles – the last of which alone generates unparalleled amounts of sensor, satellite and geospatial data.

By centralising data in the core and the cloud, businesses can extract more value from their data and ensure that key information is shared across their different offices and teams located around the world. Instead of having multiple offices tied together across a complex and expensive infrastructure, data can be made accessible through one central location.

As well as increasing efficiencies, this boosts security by consolidating data in a single data centre at a single point in time. It can then be replicated to additional data centres if required. So, how can organisations go about putting this approach into action?

Rethinking data storage

The first consideration for organisations is whether data will be kept on-premises in the core or moved to the cloud. This will depend on business requirements. If data goes directly to the cloud, the organisation will get the scalability and cost-efficiency benefits that come with it. However, there will also be constraints in terms of performance and SLAs. As such, we're seeing many organisations adopt a hybrid approach that delivers the best of both worlds. Businesses can capitalise on the flexibility of the cloud, while also enjoying the increased security and performance of on-premises infrastructure for certain types of data.

This then connects to the replication mechanism – technology that understands what type of data is being worked on and where it needs to go. Tools that understand whether data needs to be replicated to the cloud or to the core site can help organisations centralise their data in a more effective way across multiple sites, placing data in the proper storage location to ensure cost-efficiency.
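A sketch of how such a rule set might look in practice follows below; the data categories, destinations and default behaviour are purely illustrative assumptions:

```python
# Hypothetical routing table: data category -> replication target.
# Categories and targets are illustrative, not from any specific product.
REPLICATION_RULES = {
    "telemetry":  "cloud",  # high-volume sensor data, analysed centrally
    "video":      "core",   # large files, costly to push over the WAN
    "compliance": "core",   # regulated data kept on-premises
}

def replication_target(category: str) -> str:
    """Return where a given category of edge data should be replicated."""
    return REPLICATION_RULES.get(category, "core")  # conservative default

for item in ("telemetry", "video", "unknown"):
    print(item, "->", replication_target(item))
```

The important design choice is the default: unrecognised data lands in the core rather than the cloud, so nothing unexpected accrues cloud egress or storage charges.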

Of course, security must always be front of mind. When data is transferred from the edge to the core and the cloud, organisations need a secure connection and protocol to protect against potential attacks. They should also ensure that data is encrypted in transit, providing peace of mind that it won't fall into the wrong hands.
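In practice, a secure connection usually means TLS, often supplemented by client-side encryption so the data remains protected even at rest on the destination. Here is a minimal sketch using the widely available Python cryptography and requests libraries; the ingest URL is a placeholder, and a real deployment would manage keys in a KMS rather than generating them inline:

```python
import requests
from cryptography.fernet import Fernet

# Placeholder endpoint -- in reality, your core or cloud ingest API.
INGEST_URL = "https://core.example.com/ingest"

key = Fernet.generate_key()  # production keys belong in a KMS, not inline
cipher = Fernet(key)

# Encrypt the payload before it ever leaves the edge site.
payload = cipher.encrypt(b"raw sensor readings from the edge site")

# requests verifies the server's TLS certificate by default, so the
# ciphertext also travels over an authenticated, encrypted channel.
response = requests.put(INGEST_URL, data=payload, timeout=30)
response.raise_for_status()
```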

Finally, take the time to understand how different processes and storage methods impact the organisation’s SLAs. The cheapest storage option might take the longest to retrieve data from, so there must be a balance between cost efficiency and the ability to access the right data when it’s needed.
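That balance can be made concrete with some back-of-the-envelope arithmetic. The prices and retrieval times below are invented purely for illustration; real figures vary by provider and region:

```python
# Illustrative figures only -- actual pricing and retrieval times vary.
TIERS = {
    #            $/GB/month   typical retrieval time (hours)
    "active":   (0.023,       0.0),
    "nearline": (0.010,       0.1),
    "archive":  (0.002,       12.0),
}

DATA_GB = 50_000   # 50 TB of edge data
SLA_HOURS = 4.0    # data must be retrievable within four hours

for tier, (price, hours) in TIERS.items():
    monthly = price * DATA_GB
    verdict = "meets SLA" if hours <= SLA_HOURS else "breaks SLA"
    print(f"{tier:>8}: ${monthly:>8,.0f}/month, ~{hours}h retrieval -> {verdict}")
```

In this made-up example, the archive tier is more than ten times cheaper per month but breaks the four-hour SLA, which is exactly the kind of trade-off the exercise is meant to surface.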

Ultimately, it’s becoming clear that the rise of edge computing is forcing organisations to reconsider how and where they process, store and manage all the unstructured data being generated. Although there are challenges, centralising data can help businesses get the most out of it in a world of ever-increasing volumes.
