March 7, 2023

Databricks Brings ML Serving into the Lakehouse

Databricks customers can now run their machine learning models directly from the vendor’s platform, eliminating the need to manage and maintain separate infrastructure for ML workloads like product recommendations, fraud detection, and chatbots.

Databricks seeks to distinguish itself among cloud analytics vendors through its ML and AI capabilities, offering pre-built, on-demand components required for building ML environments, such as data science notebooks, feature stores, model registries, and ML catalogs.

Now you can add Databricks Model Serving to that list. By running production ML inference workloads directly on the lakehouse platform, customers benefit from closer integration with data and model lineage, governance, and monitoring, the company says.

With Databricks Model Serving, the San Francisco vendor promises to scale the underlying infrastructure as needed to meet ML workload demands. That eliminates the need to pay operational staff to manage and scale infrastructure, whether cloud or on-prem, as the resources required to serve real-time ML workloads rise and fall.

But more importantly, running the serving workload on the same infrastructure where the model was developed reduces the need to integrate disparate systems, such as those needed for feature lookups, monitoring, automated deployment, and model retraining, the company says.

“This often results in teams integrating disparate tools, which increases operational complexity and creates maintenance overhead,” Databricks officials write in a blog post today. “Businesses often end up spending more time and resources on infrastructure maintenance instead of integrating ML into their processes.”

Vincent Koc, who’s the head of data at hipages Group, is one of the early users of the new offering. “By doing model serving on a unified data and AI platform, we have been able to simplify the ML lifecycle and reduce maintenance overhead,” he states on the Databricks website. “This is enabling us to redirect our efforts toward expanding the use of AI across more of our business.”

The new offering is available now on AWS and Microsoft Azure. Companies can access models managed by Databricks by invoking a REST API. The serving component runs in a serverless manner, the company says. Quality and diagnostics capabilities, which will allow the system to automatically capture requests and responses in a Delta table to monitor and debug models or generate training datasets, will be available soon, the company says.
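For readers curious what that REST invocation might look like in practice, here is a minimal sketch in Python. It is illustrative only, not Databricks’ documented API: the workspace URL, the endpoint name, the invocations path, and the request payload schema are all assumptions made for the example.

```python
# Hypothetical sketch of calling a model served from a lakehouse platform
# over REST. The URL path, endpoint name, and payload schema below are
# assumptions for illustration, not a documented contract.
import os
import requests

WORKSPACE_URL = "https://my-workspace.cloud.databricks.com"  # assumed workspace URL
ENDPOINT_NAME = "product-recommender"                        # assumed endpoint name


def score(records: list[dict]) -> dict:
    """Send feature records to the serving endpoint and return predictions."""
    response = requests.post(
        f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
        # Authenticate with a personal access token from the environment.
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
        json={"dataframe_records": records},  # assumed input format
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    predictions = score([{"user_id": 42, "item_id": 1001}])
    print(predictions)
```

Because the endpoint is serverless on the provider side, a caller like this one never provisions or scales inference infrastructure; it simply posts JSON and reads back predictions.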

More information on Databricks Model Serving will be shared during the company’s upcoming ML Virtual Event, which is being held March 14 at 9 a.m. PT.

Related Items:

MIT and Databricks Report Finds Data Management Key to Scaling AI

Databricks Bolsters Governance and Secure Sharing in the Lakehouse

Why the Open Sourcing of Databricks Delta Lake Table Format Is a Big Deal
