If DSPy is So Great, Why Isn't Anyone Using It?

https://skylarbpayne.com/posts/dspy-engineering-patterns/


Any sufficiently complicated AI system contains an ad hoc, informally-specified, bug-ridden implementation of half of DSPy.

Skylar Payne

I tried it in the past, one time “in earnest.” But when I discovered that none of my actual optimized prompts were extractable, I got cold feet and went a different route. The idea of needing to fully commit to a framework scares me. Having a computer optimize a prompt as a compilation step makes a lot of sense, but treating the resulting output prompt as an opaque blob doesn't. Some of my use cases were just far enough off the beaten path that DSPy was confusing, which didn't help. And lastly, I felt like committing to DSPy meant shutting the door on any other framework, tool, or prompting approach down the road.

I think I might have just misunderstood how to use it.

I don't know that you misunderstood. This is one of my biggest gripes with DSPy as well. I think it takes the "prompt is a parameter" concept a bit too far.

I highly recommend checking out this community plugin from Maxime, it helps "bridge the gap": https://github.com/dspy-community/dspy-template-adapter

Mannnn, here I thought this was going to be an informative article! But it’s just a commercial for the author’s consulting business.
Oops! That's actually out of date, left over from a prior template I had. I don't actually consult at the moment :). Removing!
The author is probably AI-generated. The contact section in the blog is just placeholder values. I think the age of informative articles is gone.
This is definitely a mistake! What contact section are you referring to? The only references to contact I see in this post now are at the end where I linked to my X/LinkedIn profiles but those links look right to me?
I work with author; author is definitely not AI generated.

> f"Extract the company name from: {text}"

I think one thing that's lost in all of the LLM tooling is that it's LLM-or-nothing and people have lost knowledge of other ML approaches that actually work just fine, like entity recognition.

I understand it's easier to just throw every problem at an LLM but there are things where off-the-shelf ML/NLP products work just as well without the latency or expense.
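To make that concrete, here's a toy sketch of non-LLM extraction. The hardcoded gazetteer lookup below is a stand-in for a real off-the-shelf NER model (e.g. spaCy's pretrained pipelines); the company list and function names are made up for illustration:

```python
# Toy stand-in for an off-the-shelf NER model: a gazetteer lookup.
# A real system would use a pretrained pipeline (spaCy, etc.); the
# point is that no LLM call, latency, or token cost is needed here.

KNOWN_COMPANIES = {"acme corp", "globex", "initech"}  # illustrative entries

def extract_company(text: str):
    """Return the first known company mentioned in `text`, or None."""
    lowered = text.lower()
    for name in KNOWN_COMPANIES:
        if name in lowered:
            return name
    return None

print(extract_company("The quarterly report from Globex beat expectations."))
# prints: globex
```

For the "extract the company name" example from the article, this kind of lookup (or a real NER model behind the same function signature) runs in microseconds and costs nothing per call.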

Oh 100%! There are many problems (including this one!) that probably aren't best suited for an LLM. I was just trying to pick a really simple example that most people would follow.

I don't see it at all.

> Typed I/O for every LLM call. Use Pydantic. Define what goes in and out.

Sure, but that's not related to DSPy, and it's completely table stakes. Also not sure why the whole article assumes the only language in the world is Python.
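For what it's worth, the pattern itself is cheap to adopt without any framework. The article suggests Pydantic; here's the same "typed I/O" idea sketched with a stdlib dataclass so it runs anywhere (all names are illustrative):

```python
from dataclasses import dataclass

# Typed I/O for an LLM call: validate the model's loose JSON-ish reply
# into a schema up front instead of passing raw dicts around. Pydantic
# does this with less ceremony; a dataclass shows the same idea.

@dataclass
class ExtractCompanyOutput:
    company_name: str

def parse_llm_reply(raw: dict) -> ExtractCompanyOutput:
    # Fail loudly and early on a malformed reply, not deep in the app.
    value = raw.get("company_name")
    if not isinstance(value, str) or not value:
        raise ValueError(f"malformed LLM reply: {raw!r}")
    return ExtractCompanyOutput(company_name=value)

print(parse_llm_reply({"company_name": "Acme Corp"}).company_name)
# prints: Acme Corp
```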

> Separate prompts from code. Forces you to think about prompts as distinct things.

There's really no reason prompts must live in a file with a .md or .json or .txt extension rather than .py/.ts/.go/..., except if you work at a company that decided it's a good idea to let random people change prod runtime behavior. If someone can think of a scenario where this is actually a good idea, feel free to enlighten me. I don't see how it's any more advisable than editing code in prod while it's running.

> Composable units. Every LLM call should be testable, mockable, chainable.

> Abstract model calls. Make swapping GPT-4 for Claude a one-line change.

And LiteLLM or `ai` (Vercel), actually the most-used packages, aren't? You're comparing downloads with LangChain, probably the worst package to gain popularity in the last decade. It was just first to market; after a short while most realized it's horrifically architected, and now it's just coasting on former name recognition while everyone who needs to get shit done uses something lighter like the two above.
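Agreed this doesn't need a heavy framework: a structural interface plus one constructor line is the whole "abstract model calls" pattern, and it makes every call mockable for free. The two clients below are fakes, not real SDK calls; a real version would wrap the provider SDKs (or just use LiteLLM, which does exactly this):

```python
from typing import Protocol

# "Abstract model calls": callers depend only on this Protocol, so
# swapping providers is a one-line change at construction time.

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeGPT4:
    # Fake client; a real one would wrap the OpenAI SDK.
    def complete(self, prompt: str) -> str:
        return f"gpt-4: {prompt}"

class FakeClaude:
    # Fake client; a real one would wrap the Anthropic SDK.
    def complete(self, prompt: str) -> str:
        return f"claude: {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Business logic never names a provider, so tests can pass a fake.
    return model.complete(f"Summarize: {text}")

model: ChatModel = FakeClaude()  # the one-line swap
print(summarize(model, "hello"))  # prints: claude: Summarize: hello
```

This also covers the "composable units" bullet: `summarize` is testable and chainable precisely because the model dependency is injected.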

> Eval infrastructure early. Day one. How will you know if a change helped?

Sure, to an extent. Outside of programming, most things where LLMs deliver actual value are very nondeterministic, with no single right answer; that's exactly what they offer. Plenty of those outputs an LLM can't judge the quality of either. Having basic evals is useful, but you can quickly find their development taking more time than it's worth.

But above all.. the comments on this post immediately make clear that the biggest differentiator of DSPy is the prompt optimization. Yet this article doesn't mention that at all? Weird.

>the whole article assumes the only language in the world is Python.

This was my take as well.

My company recently started using DSPy, but you know what? We had to stand up an entire new repo in Python for it, because the vast majority of our code is not Python.

I think this is an important point! I am actually a big fan of doing what works in the language(s) you're already using.

For example: I don't use DSPy at work! I'm working in a primarily dotnet stack, so we definitely can't use it... But still, I see the same patterns seeping through that I think are important to understand.

And then there's the question of "how do we implement these patterns idiomatically and ergonomically in our codebase/language?"

I think all of these things are table-stakes; yet I see that they are implemented/supported poorly across many companies. All I'm saying is there are some patterns here that are important, and it makes sense to enter into building AI systems understanding them (whether or not you use Dspy) :)
I used DSPy in production, then reverted the bloat, as it literally gave me nothing of added value in practice but a lot of friction when I needed precise control over the context. Avoid!