August 31, 2022

Run:ai Announces First Hybrid Cloud Software Solution for AI Workloads

TEL AVIV, Israel, Aug. 31, 2022 — Run:ai, a leader in compute orchestration for AI workloads, today announced that its Atlas Platform is the first to support hybrid cloud and multi-cloud AI infrastructure. Run:ai’s centralized monitoring and control panel provides a unified, consistent user experience for managing resources across locations, both on-premises and in the cloud. With Run:ai, organizations can easily adopt a multi-cloud strategy, avoiding unplanned downtime, boosting compute availability, and controlling costs.

“Using several cloud service providers or a combination of on-prem and cloud to manage infrastructure is the goal for most organizations, but the challenges can be daunting,” said Ronen Dar, co-founder and CTO of Run:ai. “Companies can underestimate the time and effort it takes to abstract infrastructure and migrate workloads to different clouds. Provider lock-in happens early, and it can take months to train IT and DevOps teams on every environment. The lack of centralized monitoring also means that users must work with different tools to manage multiple clusters across multiple clouds, which differing price models further complicate.”

Run:ai’s Atlas now provides a unified user experience through full abstraction, so researchers can keep using each cloud provider’s managed Kubernetes platform and leverage the best of every CSP’s offering. Researchers can also keep using their framework of choice and favorite development tools. Run:ai’s Control Plane is a single pane of glass, with centralized and multi-tenant management of resources, utilization, health, and performance across every stage of the AI pipeline, no matter where workloads run. Run:ai also removes GPU configuration limitations, allowing teams to split GPUs into fractions for smaller inference workloads.

Many organizations are also seeking a hybrid cloud architecture to keep their most sensitive data on-prem – where costs might be lower and performance better – while still leveraging the benefits of the cloud, such as availability and scalability.

“With Run:ai, an AI healthcare company training models, for example, can keep their sensitive patient data on-prem, and once the model is trained, they can seamlessly move to the cloud to deploy to a customer,” added Dar. “Run:ai helps companies transition easily to a hybrid-cloud strategy and get the best of both worlds.”

About Run:ai

Run:ai’s Atlas Platform brings cloud-like simplicity to AI resource management – providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system – which includes a workload-aware scheduler and an abstraction layer – helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud.


Source: Run:ai