Pinecone Working with AWS to Solve Generative AI Hallucination Challenges
NEW YORK, Sept. 13, 2023 — Pinecone, the vector database company providing long-term memory for artificial intelligence (AI), announced an integration with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS) for building GenAI applications. The announcement means customers can now drastically reduce hallucinations and accelerate the go-to-market of Generative AI (GenAI) applications such as chatbots, assistants, and agents.
The Pinecone vector database is a key component of the AI tech stack, helping companies solve one of the biggest challenges in deploying GenAI solutions — hallucinations — by allowing them to store, search, and find the most relevant and up-to-date information from company data and send that context to Large Language Models (LLMs) with every query. This workflow is called Retrieval Augmented Generation (RAG), and with Pinecone it helps search and GenAI applications deliver relevant, accurate, and fast responses to end users.
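The RAG workflow described above can be sketched in a few lines. The following is a minimal, illustrative example only: a toy in-memory store and bag-of-words similarity stand in for Pinecone's vector search, and prompt assembly stands in for the call to an LLM; none of these names are Pinecone or Amazon Bedrock APIs.

```python
# Hypothetical sketch of Retrieval Augmented Generation (RAG).
# A toy in-memory store stands in for a vector database like Pinecone,
# and prompt assembly stands in for the call to an LLM.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real systems use dense embeddings from an AI model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank company documents by similarity to the query; keep the top_k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM by prepending the retrieved context to the query,
    # so answers come from company data rather than model memory alone.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
    "The Tel Aviv office opened in 2021.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

In production, the retrieval step is a single query against a Pinecone index and the assembled prompt is sent to the chosen LLM; the grounding pattern, however, is the same as in this sketch.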
Amazon Bedrock is a serverless platform that lets users select and customize the right models for their needs, then integrate and deploy them using popular AWS services such as Amazon SageMaker.
Pinecone’s integration with Amazon Bedrock allows developers to quickly and effortlessly build streamlined, factual GenAI applications that combine Pinecone’s ease of use, performance, cost-efficiency, and scalability with their LLM of choice. Pinecone’s enterprise-grade security and its availability on the AWS Marketplace allow developers in enterprises to bring these GenAI solutions to market significantly faster.
“We’ve already seen a large number of AWS customers adopting Pinecone,” said Edo Liberty, Founder & CEO of Pinecone. “This integration opens the doors to even more developers who need to ship reliable and scalable GenAI applications… yesterday.”
“With generative AI, customers have the ability to reimagine their applications, create entirely new customer experiences, and improve overall productivity,” said Atul Deo, general manager, Amazon Bedrock at AWS. “Latest personalization techniques like Retrieval Augmented Generation (RAG) have the ability to deliver more accurate generative AI responses that make the most of pre-existing knowledge but can also process and consolidate that knowledge to create unique, context-aware answers, instructions, or explanations in human-like language rather than just summarizing the retrieved data. This integration of Amazon Bedrock and Pinecone will help customers streamline their generative AI application development process by helping deliver relevant responses.”
“We have AI applications in AWS and tens of billions of vector embeddings in Pinecone,” said Samee Zahid, Director of Engineering, Chipper Cash. “Connecting the two in a simple, serverless API is a game-changer for our development velocity.”
The integration will be generally available to all Amazon Bedrock and Pinecone users by the end of the fourth quarter. See the latest blog post for more information on the integration.
Pinecone created the vector database, which acts as the long-term memory for AI models and is a core infrastructure component for AI-powered applications. The managed service lets engineers build fast and scalable applications that use embeddings from AI models, and get them into production sooner. Pinecone recently raised $100M in Series B funding at a $750M valuation. The funding round was led by Andreessen Horowitz, with participation from ICONIQ Growth and previous investors Menlo Ventures and Wing Venture Capital. Pinecone operates in San Francisco, New York, and Tel Aviv.