August 25, 2023

DSPy Puts ‘Programming Over Prompting’ in AI Model Development


If you’re tired of writing endless prompts for large language models, you might be interested in DSPy, a new framework from Stanford University that aims to enable programmers to work with foundational models by using a set of Python operations.

AI developers today rely on prompt engineering to coax LLMs into generating answers with the context they are looking for. Tools like LangChain, LlamaIndex, and others provide the capability to “chain” together various components, including the LLM prompts, to build GenAI applications. However, the approach leaves much to be desired, particularly for programmers accustomed to having greater control.

DSPy seeks to eliminate the “hacky string manipulation” of prompt engineering with something more deterministic that can be manipulated programmatically. Specifically, it does this by providing “composable and declarative modules” for instructing foundation models in a Pythonic syntax, as well as “a compiler that teaches LMs how to conduct the declarative steps in your program,” according to the DSPy page on GitHub.
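As a rough illustration (the client setup, field names, and example question here are assumptions drawn from the project’s public examples, not details from this article), a declarative “signature” states what a step should take in and produce, and a module such as dspy.Predict turns it into something callable while the framework constructs the underlying prompt:

    import dspy

    # Assumed setup: point DSPy at any supported language model client.
    # lm = dspy.OpenAI(model="gpt-3.5-turbo")
    # dspy.settings.configure(lm=lm)

    class BasicQA(dspy.Signature):
        """Answer questions with short factoid answers."""
        question = dspy.InputField()
        answer = dspy.OutputField(desc="often between 1 and 5 words")

    # dspy.Predict turns the declarative signature into a callable module;
    # the framework, not the developer, writes the underlying prompt.
    generate_answer = dspy.Predict(BasicQA)
    prediction = generate_answer(question="What license is DSPy released under?")
    print(prediction.answer)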

“DSPy unifies techniques for prompting and fine-tuning LMs as well as improving them with reasoning and tool/retrieval augmentation, all expressed through a minimalistic set of Pythonic operations that compose and learn,” the DSPy team writes. “Instead of brittle ‘prompt engineering’ with hacky string manipulation, you can explore a systematic space of modular and trainable pieces.”

DSPy users write free-form Python code and can use loops, if statements, and exceptions, according to the project’s homepage. Developers use the framework, which is distributed openly under an MIT License, to build modules for their applications, such as retrieval-augmented generation (RAG) systems for question answering, as sketched below. The modules can be run as is in zero-shot mode or compiled for greater accuracy, and users can add more modules down the road as their needs change.
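A retrieval-augmented QA pipeline of the kind described above might look like the following minimal sketch, where the retriever settings and parameter names are illustrative assumptions; the uncompiled module can be called directly for zero-shot use:

    import dspy

    class RAG(dspy.Module):
        """A small retrieval-augmented QA pipeline composed from built-in modules."""

        def __init__(self, num_passages=3):
            super().__init__()
            # Assumes a retrieval model was configured via dspy.settings (e.g., a ColBERTv2 endpoint).
            self.retrieve = dspy.Retrieve(k=num_passages)
            self.generate_answer = dspy.ChainOfThought("context, question -> answer")

        def forward(self, question):
            context = self.retrieve(question).passages  # search step
            return self.generate_answer(context=context, question=question)  # predict step

    rag = RAG()  # zero-shot: usable as is, before any compilation
    # answer = rag(question="Where is the DSPy code hosted?").answer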

“Let’s move past ‘prompt engineering’ & bloated brittle abstractions for LMs,” says Stanford computer science PhD candidate Omar Khattab, the lead DSPy contributor, in a post on X (formerly Twitter). “DSPy unifies prompting, finetuning, reasoning, retrieval augmentation—and delivers large gains for your pipelines.”

UC Berkeley Associate Professor of Computer Science Matei Zaharia, who worked as an associate professor of computer science at Stanford until July and was involved in the DSPy project, says DSPy’s release this week is a big deal.

“We’ve spent the past 3 years working on LLM pipelines and retrieval-augmented apps in my group, and came up with this rich programming model based on our learnings,” Zaharia says on X. “It not only defines but *automatically optimizes* pipelines for you to get great results.”

Compared to tools like LangChain, which provide pre-written prompts for various LLMs, DSPy gives developers a more powerful abstraction for building GenAI apps, the framework’s backers say.

“Unlike these libraries, DSPy doesn’t internally contain hand-crafted prompts that target specific applications you can build,” they write on GitHub. “Instead, DSPy introduces a very small set of much more powerful and general-purpose modules that can learn to prompt (or finetune) your LM within your pipeline on your data.”
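Concretely, “learning to prompt” happens when a pipeline is compiled against a metric and a small training set. The sketch below uses the BootstrapFewShot optimizer from the repository; the metric and toy examples are placeholders, not anything from the article:

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    # Placeholder metric: counts a prediction as correct if it contains the gold answer.
    def answer_match(example, prediction, trace=None):
        return example.answer.lower() in prediction.answer.lower()

    # Toy labeled data; with_inputs() marks which fields the pipeline receives as input.
    trainset = [
        dspy.Example(question="Who released DSPy?", answer="Stanford").with_inputs("question"),
        # ... more labeled examples ...
    ]

    # Compiling lets the pipeline's modules learn effective prompts (demonstrations)
    # from the data, instead of relying on hand-written prompt strings.
    teleprompter = BootstrapFewShot(metric=answer_match)
    compiled_rag = teleprompter.compile(RAG(), trainset=trainset)  # RAG as sketched earlier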

DSPy is based on Demonstrate–Search–Predict, the previous version of the framework, which was released in January. You can download the software at github.com/stanfordnlp/dspy.

Related Items:

Prompt Engineer: The Next Hot Job in AI

Are We Nearing the End of ML Modeling?

GenAI Debuts Atop Gartner’s 2023 Hype Cycle

 
