July 23, 2021

Containerized Storage with Kubernetes Goes Mainstream in the Large Enterprise

Erik Kaulberg

Released in 2014 as an open-source system for automating the deployment and management of containerized applications, Kubernetes has come a long way in the past seven years. It was first created by Google and then turned over to a vendor-neutral body, the Cloud Native Computing Foundation (CNCF), which manages it as an open-source project. But we are only now starting to see mature Kubernetes deployments at mainstream enterprises.

As more companies transition from monoliths to microservices, the use of container technologies has grown. With that proliferation, applications came to be composed of hundreds or even thousands of containers, making them difficult to manage by hand. A need for orchestration technologies emerged.

Kubernetes is an orchestration tool that helps developers deploy and manage containerized applications across different environments, such as cloud, virtual, and physical infrastructure. Applications run in isolated user spaces called containers, a form of virtualization.
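To make the orchestration model concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the application name and image are illustrative assumptions, not from this article): the operator declares the desired state, and Kubernetes keeps that many container replicas running wherever the cluster happens to be hosted.

```yaml
# Illustrative Deployment: Kubernetes maintains three replicas of a
# containerized web service, regardless of the underlying infrastructure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # desired number of container instances
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any containerized application image
        ports:
        - containerPort: 80
```

If a node fails or a container crashes, Kubernetes reconciles the actual state back to the declared three replicas, which is the "managing applications, not machines" shift described below.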

Together, Kubernetes and containers enable application-oriented data centers. Containers encapsulate the application environment, so the focus shifts to managing applications rather than the traditional practice of managing machines.

Containerized applications are increasingly becoming mainstream services that enterprises want to run alongside other application workloads and services. Container environments are emerging as tier-one environments, alongside VMware environments – in fact, with VMware’s Tanzu portfolio capabilities, containers may well be part of the VMware environment for many large enterprises.


Organizations with more of a classic open-source inclination tend to focus on Red Hat OpenShift, the dominant commercial Kubernetes distribution. In any case, petabyte scale is becoming a realistic target for leading-edge enterprise Kubernetes deployments.

This all would not be possible without the standardized approach enabled by the Container Storage Interface (CSI), which is a mechanism to manage storage directly within container environments. Released in early 2019, CSI has facilitated the construction of production-level container environments that deliver the core enterprise requirements – stability and predictability – when paired with effective backend storage solutions.
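As a sketch of how that standardized approach is consumed in practice (the provisioner name below is a placeholder, not a real driver), a StorageClass points at a vendor's CSI driver, and applications request storage declaratively through PersistentVolumeClaims without knowing anything about the backend array:

```yaml
# A StorageClass delegating provisioning to a (hypothetical) CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com   # placeholder; real driver names come from the storage vendor
reclaimPolicy: Delete
---
# The application asks for capacity; the CSI driver provisions it on the backend.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: fast-block
  resources:
    requests:
      storage: 100Gi
```

The same claim works unchanged against any conformant CSI backend, which is what lets enterprises pair Kubernetes with their preferred storage solutions.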

Both the availability of the CSI standard and the VMware Tanzu implementation of Kubernetes have been instrumental in turning an open-source solution that was often considered a “science project” into a viable, robust environment for the real world, just as virtual machines (VMs) are consumed in enterprise environments today. Overall, the realignment around Kubernetes has been critical to drive enterprise adoption of container environments beyond side projects or highly customized environments.

CSI as a Gateway

An effective Kubernetes implementation provides assurance that applications are always accessible to users: applications load quickly and respond promptly. Kubernetes also has emerging backup and restore features.

But one of the most interesting things about CSI is that it acts as a gateway to the true potential of the underlying attached storage. A well-designed CSI driver makes it easier to bring in advanced storage capabilities, such as scalable snapshots and Neural Cache data placement mechanisms, both of increasing interest to large enterprises as they scale their Kubernetes environments.
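For example, the scalable snapshots mentioned above surface in Kubernetes through the CSI snapshot API; a sketch (the class and claim names are illustrative) looks like:

```yaml
# Snapshot a PVC through the CSI snapshot API (requires the external
# snapshot controller and a CSI driver with snapshot support).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass  # illustrative snapshot class name
  source:
    persistentVolumeClaimName: app-data   # existing PVC to snapshot
```

The actual snapshot work is delegated to the storage backend, so an array with efficient native snapshots can expose that capability to every Kubernetes application through one standard interface.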


A good Kubernetes implementation delivers high availability with no downtime, as well as scalability and disaster recovery. As usage grows, volumes will need to be scaled on an as-needed basis, so flexible consumption-based purchasing models are a good fit for Kubernetes environments. And attention must always be paid to the economics: both the direct cost of the infrastructure and the ongoing implementation and support costs, which can far outweigh it.
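Scaling volumes on an as-needed basis maps to CSI volume expansion: if the StorageClass allows it, growing a volume is just a matter of raising the claim's requested size (the names, driver, and sizes below are illustrative assumptions):

```yaml
# StorageClass opt-in for online volume expansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: csi.example.com   # placeholder CSI driver name
allowVolumeExpansion: true
---
# Raising spec.resources.requests.storage on an existing PVC (say, from
# 100Gi to 200Gi) triggers the CSI driver to expand the backing volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: expandable
  resources:
    requests:
      storage: 200Gi
```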

Most organizations are ultimately aiming to build their Kubernetes environments into private clouds. Indeed, a centralized private cloud using Kubernetes and CSI keeps control in the hands of the CIO and IT team of a large enterprise – while delivering the power to the developers and DevOps teams to move as the business evolves.

CSI Is Evolving

As Kubernetes features and functions continually improve, CSI continues to evolve rapidly. However, a new release every six weeks yields more churn than value for typical enterprises. As an enterprise storage solutions leader, we do not want to get too far ahead of the standards, and we strive for a balance between regularly adding new functionality and meeting enterprise stability expectations.

Kubernetes will continue to evolve and improve as containers take a more prominent place in the enterprise platform stack. Even today, though, by becoming the industry-standard approach for deploying containers in production, Kubernetes has finally gone mainstream.

About the author: Erik Kaulberg is a vice president at Infinidat, a provider of data storage solutions.

Related Items:

Is Kubernetes Overhyped?

The Biggest Reason Not to Go All In on Kubernetes

The Curious Case of Kubernetes In the Enterprise

Datanami