Excelero Announces AI/ML and Hyperscale Customer Growth
SAN JOSE, Calif., Jan. 28, 2020 – Excelero, a disruptor in software-defined block storage, added 3x more new artificial intelligence and machine learning (AI/ML) customers during fiscal year 2019, and the company also grew new hyperscale web customers using its NVMesh Elastic NVMe solution by 3x. Revenue from new deployments represented 70% of Excelero's full-year total, with the remaining 30% coming from existing accounts. New customers secured during January 2020 continue the trend.
Excelero's strong 2019 affirms the appeal of NVMesh as high-performance GPU storage for AI/ML and hyperscale applications. With 270% growth in enterprise AI deployments over the past four years, and 154% growth in 2019 alone (Tractica, 2019), data center operators are re-evaluating their storage architectures to ensure they support the high throughput and low latency that AI/ML systems and their associated transactional databases demand. Excelero's NVMesh delivers up to 10x faster data processing for multi-server, multi-GPU compute nodes when processing massive datasets for complex financial analysis, climate modeling and related workloads.
New Excelero AI/ML installations during 2019 include award-winning biotech and pharmaceutical firms as well as marquee government-funded research institutions. In the hyperscale storage category, two of Excelero's Fortune 1000 customers, both of which have purchased additional NVMesh licenses annually for the past three years, also significantly expanded their deployments in 2019. One new hyperscale deployment that rolled out in 2019 approached 5,000 nodes.
“The AI revolution is real, hyperscale deployments are multiplying, and Excelero was fully focused on these key opportunities in 2019,” said Lior Gal, CEO and co-founder of Excelero. “The word is out that when AI/ML deployments run into the GPU storage bottleneck, Excelero can set them right because our elastic NVMe architecture provides more bandwidth and I/Os per second while reducing latency. We’re positioned to capture even more of this high-growth segment in 2020.”
Other 2019 highlights for Excelero include:
Collaboration on NVIDIA Magnum IO – In November NVIDIA debuted NVIDIA Magnum IO, a suite of software that enables dramatically greater data throughput on multi-server, multi-GPU computing nodes than previously possible. NVIDIA developed Magnum IO in close collaboration with industry leaders in networking and storage, including Excelero. The GPUDirect Storage feature within NVIDIA Magnum IO enables researchers to bypass CPUs when accessing storage and quickly reach data files for simulation, analysis or visualization. By combining GPUDirect Storage with NVMesh, users bypass CPUs on the entire path from GPU memory to NVMe devices, gaining frictionless access to shared NVMe storage at local-drive performance.
More strategic partnerships – With its 100% channel sales model, Excelero is continually expanding its partner network, and in 2019 it added strategic relationships with Lenovo, Penguin, and QCT.
Subscription sales model – In 2019 Excelero moved from perpetual licensing to a subscription sales model, making its software easier to buy and simplifying the delivery of partner services.
More awards and patents – Honors included an FMS Best of Show 2019 award for the Most Innovative Flash Memory Technology of the year; a Gold Stevie Award in the International Business Awards; a finalist spot in SearchStorage's 2019 Products of the Year competition in the storage system and application software category; and a berth on the 2019 Entrepreneur 360 list for the third year in a row. During 2019 Excelero also was awarded its third and fourth patents, covering heightened shared NVMe storage efficiency and an approach to "tail latency," with 11 more patents pending.
About Excelero
Excelero delivers low-latency distributed block storage for hyperscale applications such as AI, machine learning and GPU computing, in the cloud and on the edge. Founded in 2014 by a team of storage veterans and inspired by the tech giants' shared-nothing architectures for web-scale applications, the company has designed a software-defined block storage solution that meets the low-latency performance and scalability requirements of the largest web-scale and enterprise applications.