June 19, 2019

Towards The Edge: How Innovative New Technologies Are Redefining Data Delivery

Neeraj Murarka


It is well known that the critical mass of digital data resides, and is collected, at the network edge. Constraints such as power consumption, bandwidth, latency, connectivity and security have led to innovations that are redefining how data is delivered.

It’s no secret that the bulk of all data is produced and aggregated at the edge of a network – yet the devices producing it are far from the data centers responsible for manipulating, processing and storing this information for further use.

Consider a ‘hub and spoke’ topology, in which all nodes are isolated from each other and communicate only with a single central powerhouse. The scalability problems rapidly become apparent: beyond the fact that the entire network hinges on a single point of failure, power consumption, latency and computational requirements will only continue to rise as more devices join.

It isn’t a matter of if, but of when a wave of new devices with increased bandwidth requirements will need to link up with existing networks. And there isn’t much time to come up with a solution: the number of IoT clients is expected to double in the next four years, as is the volume of global consumer internet traffic by 2022.

Fortunately, the current constraints on networks have forced providers seeking high-performance infrastructures to get creative with emerging technologies like distributed networks and edge computing. Instead of reinventing the wheel that is the cloud computing model, these advances integrate into existing networks to push data (and the means to process it) closer to users.

A Primer on Edge Computing

At present, though the average user may have a myriad of devices in their home (smartphones, consoles, tablets, PCs), the fact remains that little computation is actually performed on them: instead, they phone home to a plethora of servers that receive and distribute content. Things have come a long way from the days of merely requesting static HTML pages, with 4K streaming and massive file uploads/downloads increasingly being adopted around the globe.


It stands to reason that streaming a high-quality video from a server within your immediate proximity would be vastly more effective than streaming that same video from a different continent. By extension, if a handful of individuals in that same area were to stream the video from the local server, not only would they enjoy lower latency, but the burden on the remote server would also be reduced.

This is the concept of edge computing in a nutshell – the operations once performed by centralised data centers are conducted either on end-user devices, or in ‘mini’ data centers nearby.
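A rough back-of-envelope calculation illustrates why proximity matters. Even ignoring routing, queuing and processing overhead, propagation delay alone scales with distance; the sketch below assumes a signal speed in optical fiber of roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), so real-world round-trip times would be higher still.

```python
# Back-of-envelope round-trip time (RTT) from propagation delay alone.
# Assumes ~200,000 km/s signal speed in fiber; real RTTs are higher
# due to routing, queuing, and server processing.

FIBER_SPEED_KM_PER_S = 200_000

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# A nearby edge server vs. a server on another continent:
print(propagation_rtt_ms(100))     # 100 km away  -> ~1 ms
print(propagation_rtt_ms(10_000))  # 10,000 km away -> ~100 ms
```

Two orders of magnitude in distance translate directly into two orders of magnitude in unavoidable delay – a floor that no amount of server hardware can lower, but that moving the server closer can.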

To illustrate how this might work in the case of enterprise IoT, consider a situation where a business has hundreds of sensors in an assembly line. Normally, these would need to package data and constantly stream it back to a distant server, but with edge computing, they can instead point to a local ‘node’, which can sort, process and return data (only uploading what is strictly necessary to the server). There are a handful of benefits to this:

Lower latency: as touched on above, the short distance between clients and servers makes for faster interactions, catering to use cases where regular cloud computing simply cannot be deployed. As a knock-on effect of computing, sorting and discarding data locally, the node will have more bandwidth to communicate with the data center.

Higher availability: edge nodes put control back in the hands of the businesses that oversee them, ensuring that an outage at a large data center does not halt those businesses’ operations.

Greater privacy/security: if the past few years have taught us anything, it’s that entrusting third parties with sensitive data is a recipe for disaster – cybercriminals know that, by breaching a given cloud provider’s servers, they can gain access to anywhere from thousands to millions of logins and confidential documents. Though passwords may provide a veil of security or privacy, the party storing the data remains privy to its users’ activities.

This can be mitigated by keeping sensitive data away from third parties – whether by ensuring it never leaves the edge, or by encrypting it at the edge and storing only the encrypted information with the third party.
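The assembly-line scenario above can be sketched in a few lines. This is a hypothetical illustration, not a real API: all names and the ‘normal range’ threshold are assumptions. The idea is simply that raw readings are reduced locally, and only a compact summary plus out-of-range values ever leave the edge.

```python
from statistics import mean

# Hypothetical edge node for an assembly line: sensor readings are
# aggregated locally, and only a summary plus anomalous readings are
# forwarded to the distant data center. Names and thresholds here are
# illustrative assumptions.

NORMAL_RANGE = (10.0, 90.0)  # acceptable sensor values

def process_at_edge(readings: list[float]) -> dict:
    """Reduce raw readings to the payload actually worth uploading."""
    anomalies = [r for r in readings
                 if not (NORMAL_RANGE[0] <= r <= NORMAL_RANGE[1])]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "anomalies": anomalies,  # only these raw values leave the edge
    }

batch = [42.0, 45.5, 44.1, 97.3, 43.8]  # 97.3 is out of range
payload = process_at_edge(batch)
```

Five raw readings collapse into one small dictionary, so bandwidth to the data center scales with the number of anomalies rather than with raw sensor volume – the ‘only uploading what is strictly necessary’ property described above.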

A Case Study: Gaming

When it comes to the multibillion-dollar gaming industry, there are huge benefits to be gained from leveraging edge computing to enhance multiplayer offerings: as anyone familiar with online gaming can attest, even split-second delays can ruin the experience altogether. High bandwidth and low latency are of paramount importance to the enjoyment of multiplayer titles.

A multiplayer lobby will often connect players with peers on their continent – clients render the environment, physics and other visual elements on their PCs/consoles, but their keystrokes and commands are relayed to the server, which updates their state and broadcasts it to the other players.

With edge computing, this mechanism can be made even more efficient so as to reduce lag: as with other applications, if the server is closer to the players, there’s less latency. Some have set their sights higher, and believe that edge computing will mark the end of consoles altogether.

The concept of ‘cloud gaming’ has been around for a while, but it hasn’t yet materialised into anything significant: gamers still need end devices to render graphics. OnLive promised console-less, high-performance gaming years ago, but that promise was never truly realised because the supporting architecture wasn’t there.

However, a new wave of services seems poised to achieve what OnLive could not – industry heavyweights like Google and Microsoft have begun offering their own cloud gaming platforms (Stadia and xCloud, respectively). By giving users the ability to connect to the servers closest to them from low-spec devices, any smartphone or PC owner will soon be able to stream titles as they would a YouTube video.

A Data-Driven Future

Gaming is just the tip of the iceberg. With the tech industry slowly transforming every facet of day-to-day life, and with interconnectivity playing a key role in this change, it’s imperative that the infrastructure for distributing data keeps up with throughput requirements to ensure near-instant and around-the-clock availability of critical information.

Indeed, as we progress towards a future dominated by self-driving cars, artificial intelligence and a myriad of advances in AR/VR/IoT, the importance of distributed networks powered by edge computing will continue to manifest itself. Alongside complementary technologies like 5G and the existing cloud ecosystem, edge computing is set to radically alter the way in which data is propagated.

About The Author: Neeraj Murarka is a scientist and technology entrepreneur, and is also co-founder and CTO of Bluzelle Networks. Bluzelle is a global data cache, using decentralized technologies to empower the world. Neeraj’s specializations are in decentralized trustless computer systems, cryptocurrencies, blockchains, network security, databases, and computer systems architecture. He built the Android version of Zeroblock, an industry-leading blockchain news aggregation app. Neeraj also led the architecture and development of in-flight entertainment systems for Lufthansa in collaboration with Panasonic Avionics. He originally hails from the video game industry, having worked as a senior engineer for Mattel Interactive, Mindscape Entertainment, Zynga, and Sierra Online.

Related Items:

Exploring Artificial Intelligence at the Edge

Why Data Scientists Should Consider Adding ‘IoT Expert’ to Their List of Skills

Harvesting IoT Insights, from Edge to Core

 

 

Datanami