Design in latency at the start to enhance application performance and user experiences

By David Coffey, Chief Product Officer, NS1

https://ns1.com

The internet is vastly more complex and distributed now than it was even five years ago, and the number of internet-connected devices grows by the thousands every minute. As a result, it has become harder than ever to deliver applications consistently in a secure, reliable, and fast way. Yet these applications are the backbone of how we work, learn and stay connected in a digital age, which makes solving that delivery problem all the more urgent.

Users expect exceptional application experiences, which makes consistent application performance critical yet complicated for organisations to deliver. In today’s environment, latency can no longer be treated as an issue to solve once development is complete, or something to deal with as a company rolls out into production. Instead, organisations need a much better understanding of latency and the intricate technology and processes that affect it. It should be regarded as a tier-one feature, fundamental to the success of an application and the experiences audiences have as they interact with it. But to understand how to solve for latency, it helps to first consider how the application delivery landscape has changed.

Change #1: Application Audiences Are More Distributed

First, audiences are more geographically distributed than ever before. Application delivery environments need to cover more territory, along with a greater variety of connectivity providers, making it extremely difficult to control for low latency.

The trend towards remote work was already growing before the pandemic accelerated it, and many employees now work remotely on a permanent basis. Those who are not fully remote head into corporate headquarters or branch offices less frequently under new flexible work arrangements. Customers access applications and websites from smartphones on cellular networks, on residential networks, and on the move. The ease with which they can do so only increases usage and tests the boundaries of the performance mechanisms in place.

Thousands of new devices of every variety connect to the internet every minute, and many of them access corporate networks. This creates network congestion, demands increasingly sophisticated routing, and makes it even more challenging to ensure low-latency, performant delivery.

Change #2: Application Delivery Environments Have Become More Complex

In response to this and other trends, networking infrastructure has grown more complex. Gone are the days of a centrally managed data centre and physical application stacks. Today, companies rely on private and public cloud, on-prem, multiple content delivery networks, serverless architecture, containers and so on.

Pushing infrastructure closer to end users helps control for latency in some respects; however, increased complexity without automation also increases the risk of outages and issues, as human errors and bugs are magnified. Complex applications and infrastructure depend on automation, and the teams responsible for performance need tooling that is automated as well.

There are also more factors at play that can impact performance and that sit outside an organisation’s control. A cloud or CDN provider can experience an outage, whether caused by a DDoS attack or an unintentional error, and take the company’s website or application down with it unless the organisation has invested in redundancy within its infrastructure. The device or network used to access the application may have issues of its own, such as a slow home WiFi network or a poor mobile connection.
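
To make the redundancy point concrete, here is a minimal sketch of the health-check-and-failover logic in Python. The endpoint URLs are hypothetical, and in practice this decision is usually made at the DNS or load-balancer layer rather than in application code, but the underlying loop is the same: probe the primary, and steer traffic to a backup the moment it stops answering.

```python
import urllib.request

# Hypothetical endpoints: a primary CDN-fronted origin and a backup origin.
PRIMARY = "https://cdn.example.com/health"
BACKUP = "https://backup-origin.example.com/health"

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError and socket timeouts
        return False

def choose_endpoint() -> str:
    """Serve from the primary, failing over when it misses a health check."""
    return PRIMARY if is_healthy(PRIMARY) else BACKUP

if __name__ == "__main__":
    print("Routing traffic to:", choose_endpoint())
```

Real failover systems add repeated probes and hysteresis on top of this, so that a single dropped packet does not flip traffic back and forth.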

Change #3: Operating Models Have Evolved

Given the changes above, operating models have had to shift as well. Without software-defined infrastructure, automation and DevOps practices that speed up continuous integration and delivery pipelines, it is nearly impossible for organisations to manage such complex, dynamic systems efficiently by hand. Companies now use automation, through infrastructure as code, to deploy and manage applications on the public cloud. They also use automation to manage load balancing, networks and network segmentation, and the constraints of data management and encryption. To reach their more distributed audiences, they must then scale all of this out geographically to the edge and use observability to extract accurate business intelligence.
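
The heart of infrastructure as code is declaring desired state as data and letting tooling reconcile reality with it. The Python sketch below illustrates that declare-diff-apply loop with hypothetical DNS-style records; real tools such as Terraform or Pulumi follow the same pattern at far greater scale, with provider APIs performing the changes.

```python
# Desired state declared as data; a reconciler computes the change plan.
# All record names and addresses here are hypothetical.
desired = {
    "app.example.com": {"type": "A", "answers": ["192.0.2.10", "192.0.2.11"]},
    "api.example.com": {"type": "A", "answers": ["192.0.2.20"]},
}

actual = {
    "app.example.com": {"type": "A", "answers": ["192.0.2.10"]},
    "old.example.com": {"type": "A", "answers": ["192.0.2.99"]},
}

def reconcile(desired: dict, actual: dict) -> list:
    """Return the create/update/delete plan that moves actual towards desired."""
    plan = []
    for name, spec in desired.items():
        if name not in actual:
            plan.append(("create", name))
        elif actual[name] != spec:
            plan.append(("update", name))
    plan.extend(("delete", name) for name in actual if name not in desired)
    return plan

for action, record in reconcile(desired, actual):
    print(f"{action}: {record}")  # a real reconciler would call provider APIs here
```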

Visibility into the entire environment is critical to ensuring reliable, fast application delivery. Yet collecting, measuring, and analysing data on the performance of an entire application delivery environment has grown incredibly complicated. Without API-first infrastructure, it is nearly impossible to sift through all the data from an organisation’s different technology layers, draw conclusions from it, and act on it within a reasonable time frame.
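
As a small, hypothetical illustration of what that measurement looks like once the data is flowing, the sketch below computes latency percentiles for each layer of a delivery path. The sample values are invented; in practice they would be pulled continuously from each layer’s monitoring APIs, which is exactly why API-first infrastructure matters.

```python
import statistics

# Invented latency samples (milliseconds) per delivery layer.
samples = {
    "dns":    [12, 14, 11, 13, 85, 12, 15, 13, 14, 12],
    "cdn":    [35, 40, 38, 36, 39, 210, 37, 41, 38, 36],
    "origin": [90, 95, 88, 92, 450, 91, 94, 89, 93, 90],
}

for layer, values in samples.items():
    # quantiles(n=100) returns the 1st through 99th percentile cut points.
    pct = statistics.quantiles(values, n=100)
    p50, p95, p99 = pct[49], pct[94], pct[98]
    print(f"{layer:>7}: p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
```

Tail percentiles matter more than averages here: the occasional 450 ms origin response is invisible in a mean but dominates the worst user experiences.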

A New Way to Approach Low Latency in Applications

Companies need to create environments that anticipate application, device, and internet shortcomings and still deliver consistent, superior experiences to users. They need to understand how to measure their environment and how to incorporate latency improvements by design. While they can control their own multi-layered stack, they also need to account for what is happening in the broader internet environment that can affect the performance of their applications. Combining that knowledge with data on the internal application stack helps them understand how to improve performance and enhance experiences.
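
A hedged sketch of what combining the two data sets can look like: invented real-user measurements from the wider internet are compared with internal server-side timings, region by region, to work out whether a latency problem lives in the application stack or on the network path to it. The figures and the service-level objective are illustrative only.

```python
# Invented figures (milliseconds): end-to-end latency seen by real users,
# and the portion spent inside the internal application stack.
rum_total_ms = {"eu-west": 180, "us-east": 60, "ap-south": 240}
internal_ms = {"eu-west": 40, "us-east": 35, "ap-south": 45}

SLO_MS = 150  # hypothetical end-to-end latency objective

for region, total in rum_total_ms.items():
    if total <= SLO_MS:
        continue  # region is within the objective
    network = total - internal_ms[region]
    # If the internal stack is fast, the gap is on the internet side: a signal
    # to add an edge location or re-steer traffic, not to tune the application.
    cause = "network path" if network > internal_ms[region] else "application stack"
    print(f"{region}: {total}ms breaches the {SLO_MS}ms SLO; dominant cost: {cause}")
```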

They also need to consider how they support the continuum of infrastructure that applications span, from serverless platforms to public cloud infrastructure and data centres. Many factors along that continuum can affect performance, but user expectations must be met regardless. This is why edge networking has become so important: routing data and network traffic in a more optimised way across distributed footprints enables lower latency, and it gives companies more control while letting them use existing infrastructure.
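
A minimal sketch of latency-based traffic steering, assuming an invented latency matrix between client regions and points of presence: each region is simply routed to the PoP it can reach fastest. Production steering layers also weigh health, capacity, and cost, but fastest-endpoint selection is the core idea.

```python
# Invented latency matrix: measured milliseconds from each client region
# to each point of presence (PoP) in a distributed footprint.
latency_ms = {
    "eu-west":  {"pop-london": 15, "pop-virginia": 90, "pop-singapore": 180},
    "us-east":  {"pop-london": 85, "pop-virginia": 12, "pop-singapore": 220},
    "ap-south": {"pop-london": 160, "pop-virginia": 230, "pop-singapore": 45},
}

def steer(region: str) -> str:
    """Route a client region to its lowest-latency point of presence."""
    pops = latency_ms[region]
    return min(pops, key=pops.get)

for region in latency_ms:
    print(f"{region} -> {steer(region)}")
```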

To put this into context, consider the healthcare sector, where finely tuned edge networks that account for latency allow medical diagnostic tools to produce results quickly, accelerating diagnosis and treatment. Other examples include global content providers such as Netflix, and collaboration companies like Dropbox, which intelligently orchestrate their application traffic across huge, distributed networks to ensure low-latency, high-quality content delivery to users.

Low latency must be regarded as a feature built into an organisation’s infrastructure from the start, a tier-one priority for companies to deliver, because superior performance is critical to the end-user experience. Today’s application audiences have no tolerance for delays and will abandon a service provider if they cannot access an application quickly and easily. Performant application delivery is now paramount to business success.
