May 10, 2024

Databricks Announces Major Updates to Its AI Suite to Boost AI Model Accuracy

Last December, Databricks, a leading provider of data intelligence and AI solutions, announced a new suite of tools for getting GenAI applications into production using Retrieval Augmented Generation (RAG). Since then, RAG applications have risen rapidly as enterprises invest heavily in building GenAI applications.

Traditional language models come with a unique set of challenges, including their tendency to “hallucinate”, lack of access to critical information beyond their training datasets, and the inability to incorporate real-time data. RAG addresses some of these issues by retrieving relevant information from external sources and combining it with the model’s ability to generate natural language.
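To make the pattern concrete, the sketch below shows the basic RAG loop in Python: retrieve documents relevant to a query, then pass them to a language model as context for the answer. The `search_index` and `llm` objects are hypothetical placeholders for whatever vector store and model client an application uses, not part of any specific Databricks API.

```python
def answer_with_rag(question, search_index, llm, top_k=3):
    """Minimal RAG loop: retrieve context, then generate a grounded answer.

    `search_index` and `llm` are hypothetical stand-ins for an application's
    vector store and language model clients.
    """
    # 1. Retrieval: fetch the documents most relevant to the question.
    documents = search_index.similarity_search(query=question, k=top_k)
    context = "\n\n".join(doc.text for doc in documents)

    # 2. Augmentation: combine the retrieved context with the user's question.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generation: the model answers from the supplied context.
    return llm.generate(prompt)
```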

To make it easier for enterprises to build high-quality RAG applications, Databricks has announced several updates to its platform, including the general availability of Vector Search for quick and accurate retrieval of relevant information.
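For a sense of what the retrieval step looks like in practice, here is a minimal sketch using the databricks-vectorsearch Python SDK; the endpoint, index, column names, and query text are illustrative placeholders rather than values from the announcement.

```python
# Minimal similarity-search query against a Databricks Vector Search index.
# Endpoint, index, and column names below are illustrative placeholders.
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()  # picks up workspace credentials from the environment

index = client.get_index(
    endpoint_name="my_vector_search_endpoint",
    index_name="catalog.schema.docs_index",
)

# Return the documents most similar to a natural-language query.
results = index.similarity_search(
    query_text="How do I configure a Model Serving endpoint?",
    columns=["doc_id", "text"],
    num_results=5,
)
print(results)
```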

Model Serving, Databricks’ environment for deploying and managing AI and ML models, has also been updated to offer a more intuitive UI, support for additional LLMs, performance improvements, and better governance and auditability.

Databricks is known as a data lakehouse pioneer, seamlessly integrating the structured data management functionality of a data warehouse with the unstructured data management capabilities of a data lake. Recently, the company has been focusing on strategic expansion, with a new partnership with Tableau to enable more seamless and secure data interaction and an expanded collaboration with NVIDIA to accelerate data and AI workloads.

“Developers spend an inordinate amount of time and effort to ensure that the output of AI applications is accurate, safe, and governed before making it available to their customers and often cite accuracy and quality as the biggest blockers to unlocking the value of these exciting new technologies,” Databricks shared in a blog post.

According to Databricks, LLM developers have traditionally focused on providing the highest quality baseline reasoning and knowledge capabilities. However, recent research shows that this is only one of many determinants of the overall quality of AI applications. Incorporating broader enterprise context, establishing proper governance and access controls, and developing a deeper understanding of the data are among the other factors critical to the quality of an AI application.


The new updates to the Databricks platform address some of these concerns by adding more enterprise context and by providing guidance that helps establish a deeper understanding of the data.

In addition, the updates take a more comprehensive approach that covers multiple components throughout the GenAI process, including data preparation, data retrieval, training on enterprise data, prompt engineering, and post-processing pipelines.

The addition of vector databases to the Databricks platform will enable models to accurately understand the unique characteristics of an individual organization, improving retrieval speed, response quality, and accuracy.

As we navigate the ever-increasing complexities of AI and chatbots, RAG stands out as a beacon of innovation. With its ability to blend the broad knowledge of language models with the precision of retrieved information, RAG is poised to transform our interactions with AI. We can expect more enterprises to continue embracing RAG to help them unlock new possibilities in their technological journey.

Related Items 

Taking GenAI from Good to Great: Retrieval-Augmented Generation and Real-Time Data

Galileo Introduces RAG & Agent Analytics Solution for Better, Faster AI Development

Harnessing Hybrid Intelligence: Balancing AI Models and Human Expertise for Optimal Performance

 
