October 7, 2021

VMware Seeks to Liberate Memory Bottlenecks with Project Capitola

At its VMworld 2021 event this week, VMware introduced Project Capitola, a new company-led effort to use its vSphere software to virtualize application access to various types of memory, from DRAM to PMEM to NVMe. The goal, the company says, is to break the bottleneck that siloed memory currently poses to efforts to scale applications.

Project Capitola is “a software-defined memory implementation that will aggregate tiers of different memory types such as DRAM, PMEM, NVMe, and other future technologies in a cost-effective manner, to deliver a uniform consumption model that is transparent to applications,” VMware executives wrote in a company blog post on Tuesday.

The goal of Project Capitola is to address an emerging divergence in how companies adopt memory, and the ensuing differences in the performance-cost tradeoffs that result, VMware Cloud Platform Business Unit executives Dave Morera and Ragav Gopalan write in the blog.

In response to these challenges, companies are building infrastructures with “silos of heterogeneous memory tiers that offer varying performance-cost benefits,” they write. “Often, these systems lead to a divergence in the memory consumption model resulting in the need for software changes in applications.”

VMware seeks to address this dilemma with Project Capitola by providing a uniform way for applications to access all of these memory types, from good old DRAM and NVMe flash to newer persistent memory (PMEM) technologies, such as Intel Optane. Compute Express Link (CXL), a newly proposed open standard for a PCIe-based interconnect that would link processors, memory, and accelerators, would also be supported, as would RDMA over Converged Ethernet (RoCE).

Project Capitola will streamline how applications access different types of memory (Source: VMware)

According to VMware, these different memory types will be grouped into “logical memory,” which can then be easily consumed by VMware’s vSphere hypervisor. “Tight integration with ESXi memory management ensures that vSphere features such as Distributed Resource Scheduler (DRS) will work seamlessly across new and existing memory tiers,” the VMware execs write.
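To make the idea of a logical memory pool more concrete, below is a minimal, purely illustrative sketch in Python. It is not VMware’s implementation or API; the Tier and LogicalMemory names are hypothetical. It simply shows the consumption model the blog post describes: an application asks a single pool for memory, and the pool decides which tier (DRAM, PMEM, or NVMe) actually backs the allocation.

# Conceptual sketch only: NOT VMware's API. It illustrates transparent memory
# tiering, where an allocator spills from the fastest, scarcest tier (DRAM) to
# slower, cheaper ones (PMEM, NVMe) behind a single allocation call.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str          # e.g. "DRAM", "PMEM", "NVMe"
    capacity_gb: int   # total capacity of this tier
    used_gb: int = 0   # capacity already handed out

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class LogicalMemory:
    """Aggregates physical tiers into one logical pool, fastest tier first."""
    tiers: list = field(default_factory=list)

    def allocate(self, size_gb: int) -> str:
        # The caller never picks a tier: the pool places the allocation in the
        # fastest tier with room, mimicking a consumption model that is
        # "transparent to applications."
        for tier in self.tiers:
            if tier.free_gb() >= size_gb:
                tier.used_gb += size_gb
                return tier.name
        raise MemoryError(f"no tier can satisfy {size_gb} GB")

pool = LogicalMemory(tiers=[
    Tier("DRAM", capacity_gb=256),
    Tier("PMEM", capacity_gb=1024),
    Tier("NVMe", capacity_gb=4096),
])

print(pool.allocate(200))   # served from DRAM
print(pool.allocate(200))   # DRAM is now short, so this spills to PMEM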

Project Capitola is currently in technology preview. The company provided no guidance about when the software will become generally available.

VMware says it’s working with Intel to bring Project Capitola to market on Intel Optane PMEM running on Xeon processors. The idea is to bring in other vendors in the memory and storage ecosystem, including Micron and Samsung, as well as server vendors like Dell (VMware’s parent company), HPE, Lenovo, and Cisco. MemVerge, which develops software that helps streamline application access to different memory technologies like DRAM and PMEM, is also part of the Project Capitola effort.

Cisco’s Dan Hanson, a CTO in the company’s cloud and compute technology office, stated: “We believe that the better cost control via easy-to-use memory tiering, deployment flexibility with pooling of memory resources, and observability at this level will only enhance our own solutions.”

HPE’s Krista Satterthwaite, vice president and general manager of Mainstream Compute in its Compute Business Group, stated: “Project Capitola has the potential to increase flexibility in shared memory and drive better price performance, while eliminating complexity of additional resources.”

Project Capitola was one of a number of new projects that VMware announced at VMworld 2021. Others include Project Arctic, which seeks “to bring multi-cloud to the fingertips of vSphere customers, by natively integrating cloud connectivity into vSphere.” It also unveiled a technology preview of Project Cascade, which “will provide a unified Kubernetes interface for both on-demand infrastructure (IaaS) and containers (CaaS) across VMware Cloud–available through an open command line interface (CLI), APIs, or a GUI dashboard.”

Related Items:

Micron Ships New Data Center SSDs, Teases DDR5

Filling Persistent Gaps in the ‘Big Memory’ Era

The Past and Future of In-Memory Computing
