January 16, 2024

Pinecone Unveils Serverless Vector Database for Enhanced AI Applications

NEW YORK, Jan. 16, 2024 — Pinecone has announced a new vector database that lets companies build more knowledgeable AI applications: Pinecone Serverless. Multiple innovations, including a first-of-its-kind architecture and a truly serverless experience, deliver up to 50x cost reductions and eliminate infrastructure hassles, allowing companies to bring remarkably better GenAI applications to market faster.

One of the keys to success is providing large amounts of data on-demand to the Large Language Models (LLMs) inside GenAI applications. Research from Pinecone found that simply making more data available for context retrieval reduces the frequency of unhelpful answers from GPT-4 by 50%, even on information it was trained on. The effect is even greater for questions related to private company data. Additionally, the research found the same level of answer quality can be achieved with other LLMs, as long as enough data is made available. This means companies can significantly improve the quality of their GenAI applications and have a choice of LLMs just by making more data (or “knowledge”) available to the LLM. Yet storing and searching through sufficient amounts of vector data on-demand can be prohibitively expensive even with a purpose-built vector database, and practically impossible using relational or NoSQL databases.
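That retrieval pattern, often called retrieval-augmented generation (RAG), is straightforward to express in code. The sketch below is illustrative only: embed() and generate() are hypothetical stand-ins for any embedding model and any LLM, and the index is assumed to expose a Pinecone-style query method that returns the closest matches along with their stored text.

    # Minimal RAG sketch: the more relevant context retrieved from the vector
    # database, the better grounded the LLM's answer tends to be.
    # embed() and generate() are hypothetical stand-ins for an embedding model
    # and an LLM; `index` is assumed to behave like a vector-database index.

    def answer_with_context(index, question: str, top_k: int = 20) -> str:
        # Embed the question into the same vector space as the stored documents.
        query_vector = embed(question)

        # Retrieve the most similar documents ("knowledge") from the vector database.
        results = index.query(vector=query_vector, top_k=top_k, include_metadata=True)
        context = "\n\n".join(m["metadata"]["text"] for m in results["matches"])

        # Ask the LLM to answer using only the retrieved context.
        prompt = (
            "Answer the question using only the context below. "
            "If the context is insufficient, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return generate(prompt)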

Pinecone Serverless is an industry-changing vector database that lets companies add practically unlimited knowledge to their GenAI applications. Since it is truly serverless, it completely eliminates the need for developers to provision or manage infrastructure and allows them to build GenAI applications more easily and bring them to market much faster. As a result, developers with use cases of any size can build more reliable, effective, and impactful GenAI applications with any LLM of their choice, leading to an imminent wave of incredible GenAI applications reaching the market. That wave has already started, with companies like Notion, CS Disco, Gong, and over a hundred others using Pinecone Serverless.

“To make our newest Notion AI products available to tens of millions of users worldwide we needed to support RAG over billions of documents while meeting strict performance, security, cost, and operational requirements,” said Akshay Kothari, Co-Founder of Notion. “This simply wouldn’t be possible without Pinecone.”

Key innovations in the breakthrough architecture of Pinecone Serverless include:

  • Separation of reads, writes, and storage significantly reduces costs for all types and sizes of workloads.
  • Industry-first architecture with vector clustering on top of blob storage provides low-latency, always fresh vector search over practically unlimited data sizes at a low cost.
  • Industry-first indexing and retrieval algorithms built from scratch to enable fast and memory-efficient vector search from blob storage without sacrificing retrieval quality.
  • Multi-tenant compute layer provides powerful and efficient retrieval for thousands of users on demand. This enables a serverless experience in which developers don’t need to provision, manage, or even think about infrastructure, as well as usage-based billing that lets companies pay only for what they use (see the sketch after this list).
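As a rough illustration of the serverless experience described above, creating and using an index reduces to naming a cloud and region rather than sizing pods or nodes. The snippet below is a minimal sketch assuming the Pinecone Python client and its ServerlessSpec-style index definition; the API key, index name, region, and embedding dimension are placeholders, and exact names may differ from the shipped SDK.

    # Minimal sketch of the serverless workflow: no capacity planning, no pods to size.
    # Assumes the Pinecone Python client; the key, index name, and region are placeholders.
    from pinecone import Pinecone, ServerlessSpec

    pc = Pinecone(api_key="YOUR_API_KEY")

    # Declare the index; reads, writes, and storage are handled by the service,
    # and usage-based billing charges only for what the index actually consumes.
    pc.create_index(
        name="knowledge-base",
        dimension=1536,  # must match the embedding model's output size
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-west-2"),
    )

    # Upsert and query without ever touching infrastructure.
    index = pc.Index("knowledge-base")
    index.upsert(vectors=[("doc-1", [0.1] * 1536, {"text": "example passage"})])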

“From the beginning, our mission has been to help every developer build remarkably better applications through the magic of vector search,” said Edo Liberty, Founder & CEO of Pinecone. “After creating the first and today’s most popular vector database, we’re taking another leap forward in making the vector database even more affordable and completely hassle-free.”

To extend the ease of use that made Pinecone a developer favorite, Pinecone Serverless is launching with integrations into the broader GenAI technology stack, including best-in-class solutions such as Anthropic, Anyscale, Cohere, Confluent, Langchain, Pulumi, Vercel, and others to be announced soon.

“Vercel’s mission is to help the world ship the best products, and in the age of GenAI that requires Pinecone as the vector database component,” said Guillermo Rauch, CEO and Founder of Vercel. “That’s why we are announcing that all Vercel users can now add Pinecone Serverless to their applications in just a few clicks, with more exciting capabilities to come.”

“We’ve seen tremendous demand from our customers to connect Confluent to Pinecone in order to fuel real-time GenAI applications,” said Jay Kreps, CEO of Confluent. “Our Pinecone Sink Connector (Preview) allows organizations to send continuously enriched data streams from across the business to Pinecone so developers can build and scale real-time GenAI applications faster.”

Pinecone Serverless is available in public preview today in AWS cloud regions, and will be available thereafter on Azure and GCP. Try Pinecone for free, learn more about this release in the announcement blog post, and dive deep into the architecture and performance in the technical post.

About Pinecone

Pinecone created the vector database to help engineers build and scale remarkable AI applications. Vector databases have become a core component of GenAI applications, and Pinecone is the market-leading solution with over 5,000 customers of all types and sizes across all industries. Pinecone has raised $138M in funding from leading investors Andreessen Horowitz, ICONIQ Growth, Menlo Ventures, and Wing Venture Capital, and operates in New York, San Francisco, and Tel Aviv.


Source: Pinecone
