November 13, 2020

When it Comes to Data Transfer, 5G is Just the Beginning

Pete Brey


If ever there was a technology tailor-made for the world we currently live in, it’s 5G. Everything we do seems based on the need for speed and connectivity. High bandwidth and low latency enable hospital employees working in remote ICUs to communicate with, and quickly send information back to, their main campuses. 5G will also be invaluable in smart cities with densely packed networks of devices that need to communicate and share information in real time. Then there are the more everyday tasks that power our lives: a video conference here, a media streaming break there.

But while 5G has the potential to be the engine that moves all of the various bits and bytes around in these examples, what really happens with those bits and bytes? How do we take advantage of that 5G infrastructure?

Moving Data Faster

The answers to those questions lie in how the data is processed as it moves across the 5G network. Organizations will need an intelligent data services architecture that enables them to access and transfer data to and from multiple sources. Ideally, this architecture will consist of an automated data pipeline that connects edge and core locations and runs over a flexible and open infrastructure supporting multiple clouds. Underneath all of this will be the 5G network that propels data movement between points A and B and, if necessary, to multiple other points.

The combination of an automated data pipeline and flexible serverless cloud computing infrastructure is ideal for the kinds of data-intensive use cases that 5G is meant to support. Popularized on Kubernetes by the open source Knative project, serverless computing does double duty as a means of accelerating application development and supporting large-scale data workloads without running resources full-time.
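To make that concrete, here is a minimal sketch of the kind of stateless handler that could run as a Knative Service, which Knative can scale down to zero when it sits idle so nothing runs full-time. The framework choice, endpoint, and processing step are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a stateless HTTP handler that could be packaged as a
# Knative Service. Knative scales such containers to zero when idle, so the
# resource isn't running full-time. The processing step is a placeholder.
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle():
    payload = request.get_json(force=True, silent=True) or {}
    # Placeholder for per-request work on the incoming data.
    result = {"records_seen": len(payload.get("records", []))}
    return jsonify(result)

if __name__ == "__main__":
    # Knative tells the container which port to listen on via PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```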


Serverless includes a feature called “eventing,” where events are triggered as data flows into the workstream and across the data pipeline. In that pipeline, data is placed into unique storage buckets, where it is analyzed in real time as it is ingested (event number one). Because the compute runs right where the data lands, this addresses the nagging data locality problem and helps eliminate the need to batch data at an edge location before sending it to the core. Depending on the findings, that data will then immediately be placed into another bucket (event number two), and then another (event number three), and so on until it reaches its final destination.
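Here is a hedged sketch of one hop in that eventing chain: a notification fires when an object lands in an ingest bucket, the handler analyzes it on arrival, and writing the result to the next bucket triggers the next event. The bucket names, event shape, endpoint, and analysis step are assumptions for illustration.

```python
# One hop in an event-driven pipeline over S3-compatible buckets (illustrative
# names and endpoint). Event one: an object arrives in the ingest bucket.
# Event two: the analyzed result is written to the next bucket in the chain.
import json
import boto3

s3 = boto3.client("s3", endpoint_url="http://object-gateway.example:8080")

INGEST_BUCKET = "edge-ingest"      # event number one fires when data lands here
NEXT_BUCKET = "analyzed-results"   # event number two fires when we write here

def analyze(raw_bytes):
    # Placeholder for real-time analysis on the ingested object.
    return {"size_bytes": len(raw_bytes)}

def on_object_created(event):
    """Handle a bucket notification for a newly ingested object (assumed shape)."""
    key = event["key"]
    obj = s3.get_object(Bucket=INGEST_BUCKET, Key=key)
    findings = analyze(obj["Body"].read())
    # Placing the result in the next bucket triggers the next event in the chain.
    s3.put_object(Bucket=NEXT_BUCKET, Key=key + ".json",
                  Body=json.dumps(findings).encode("utf-8"))
```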

Delivering Critical Information in an Emergency

To illustrate, let’s consider a potential use case that is probably all too familiar to some of us. Imagine a hurricane has just hit the coast, creating devastation for miles. An insurance adjuster is on the ground at the edge of the network responding to the disaster.

The photos and video she’s taking are automatically transferred back to a remote office over 5G. Before they go, they’re run through inference at the edge for real-time processing and analysis. The initial results are relayed to the core location for deeper machine learning training, and the findings are pushed back to the edge. As the models become more intelligent, the system will be able to use image recognition at the edge location to automatically create and deliver estimates.
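A rough sketch of that edge-to-core round trip might look like the following, with the core endpoint and the inference step standing in as placeholders: each photo is scored at the edge, and the photo plus its inference summary are relayed to the core for deeper training.

```python
# Sketch of the edge-to-core flow described above, under assumed names.
import json
import requests

CORE_ENDPOINT = "https://core.example.com/claims/ingest"  # illustrative URL

def run_inference(photo_bytes):
    # Placeholder for the edge model (e.g., damage classification on the image).
    return {"damage_detected": True, "confidence": 0.87}

def process_photo(photo_path):
    with open(photo_path, "rb") as f:
        photo = f.read()
    findings = run_inference(photo)          # real-time processing at the edge
    requests.post(CORE_ENDPOINT,             # relay results for deeper training
                  files={"photo": photo},
                  data={"findings": json.dumps(findings)},
                  timeout=30)
    return findings
```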

All of this can happen very quickly thanks to the serverless nature of the infrastructure and the automated pipelines. Add in 5G, and transfer speeds can be extraordinarily fast, approaching real time.

Moving Only What’s Necessary

All of that said, moving data is always less than ideal. It can sap productivity and cost a lot of money, especially when the information runs into the terabyte range and beyond. Even with 5G, this type of data transfer will continue to be expensive from both a financial and productivity angle.


Instead, organizations may opt to simply move a subset of their data. This is particularly true in industries that are subject to regulatory restrictions. For example, automobile manufacturers are required to keep certain sets of video data collected by their autonomous vehicles. They can transfer only that data from the edge to the core and discard the rest locally.
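A minimal sketch of that edge-side triage, assuming illustrative retention categories and an S3-compatible object store at the core, could look like this:

```python
# Edge-side triage: only regulated categories of video cross the network;
# everything else is discarded locally. Categories, bucket, and endpoint
# are assumptions for illustration.
import os
import boto3

s3 = boto3.client("s3", endpoint_url="http://core-object-store.example:8080")
RETAIN_CATEGORIES = {"disengagement", "collision_near_miss"}  # illustrative

def triage_clip(path, category):
    """Send regulated clips to the core; discard the rest at the edge."""
    if category in RETAIN_CATEGORIES:
        with open(path, "rb") as f:
            s3.put_object(Bucket="regulated-video",
                          Key=os.path.basename(path), Body=f)
    os.remove(path)  # either way, free the space at the edge
```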

These types of organizations will want to consider implementing platforms that support high-performance data streaming and fast exchange of high data volumes. They’ll need to be able to categorize, tag, and potentially store the data after it’s been ingested. Developers working on the open source Kafka project have done some exceptional work in this area.
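As a hedged illustration of that hand-off, the sketch below tags each ingested record and publishes it to a Kafka topic with the kafka-python client, so downstream services can categorize and store it; the broker address, topic name, and tags are assumptions.

```python
# Publish tagged records to a Kafka topic for downstream categorization and
# storage. Broker, topic, and record fields are illustrative.
import json
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="broker.example:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(record, source):
    tagged = {"source": source,
              "category": record.get("type", "uncategorized"),
              "payload": record}
    producer.send("edge-ingest-stream", value=tagged)

publish({"type": "telemetry", "speed_kph": 62}, source="vehicle-017")
producer.flush()
```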

Bringing 5G Home

5G will be an economic game changer by offering a communications infrastructure that provides a high-speed and reliable way to achieve true data availability from anywhere. Moving data from exponentially more devices to process at the network edge will allow 5G to improve efficiency in agriculture, energy, manufacturing, and many other industries. Reducing the latency in data movement will bring new opportunities for improving health and safety. And moving more data, faster, will enhance experiences like streaming, conferencing, and augmented reality on the go.

But, the mobile network is only the beginning. Organizations will need the proper infrastructure and intelligent data services architecture to bring the value of 5G home–whether “home” means the edge, the core, or both.

About the author: Pete Brey is marketing manager of hybrid cloud object storage at Red Hat, including Red Hat Ceph Storage and Red Hat data analytics infrastructure solution.

Related Items:

Are You Prepared for the 5G Data Crush?

5G is Driving the Acceleration of Customer Expectations

How 5G Will Serve AI and Vice Versa

 
