Been working through what "dynamic pipelines done right" looks like in practice.

The key insight: separate what the *framework* should control from what the *agent* should control.

In ZenML's dynamic pipelines, responsibilities split cleanly:

ZenML controls:
→ Fan-out (spawn N agents based on runtime data)
→ Budget and depth limits
→ Step orchestration and DAG construction
→ Artifact tracking and lineage
→ Failure handling per step (retries, fallbacks)

The agent controls:
→ "Does this document answer the query?"
→ "Should I traverse deeper?"
→ Natural language reasoning about next steps

Each traversal call appears as a separate step in the DAG, created dynamically at runtime based on the agent's decisions. You get the robustness of a pipeline framework (retries, caching, lineage, the ability to debug a specific step) without losing the flexibility agents need to explore.

The technical primitives that make this work:

step.map() fans out a step across a sequence: each item spawns a parallel execution. step.submit() returns futures, so you can track completion and pass results downstream. Steps can run inline (in the orchestrator's process, faster) or isolated (in a separate container, better for parallelism and resource isolation).
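A rough plain-Python analogy for the map/submit pattern, using the standard library's concurrent.futures rather than ZenML's actual API (score_document and fan_out are made-up names for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def score_document(doc: str, query: str) -> tuple[str, int]:
    # Toy "step": score one document by naive keyword overlap.
    overlap = len(set(doc.lower().split()) & set(query.lower().split()))
    return doc, overlap

def fan_out(docs: list[str], query: str) -> dict[str, int]:
    # Like step.map(): one parallel execution per item in the sequence.
    with ThreadPoolExecutor() as pool:
        # Like step.submit(): each call returns a future, so we can
        # track completion and pass results downstream.
        futures = [pool.submit(score_document, d, query) for d in docs]
        return dict(f.result() for f in as_completed(futures))
```

The orchestration value-add is everything this toy version lacks: retries, caching, and lineage per spawned execution.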

The pattern that took me a bit to grok: .chunk() vs .load(). When you're looping through artifacts, .chunk(idx) creates the DAG edge; it tells the orchestrator "this step depends on item X from upstream." .load() just gets the value for your Python control flow. You need both: chunks for wiring, loads for decisions.
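A toy model of that wiring-vs-decision split (illustrative class and method names only, not ZenML's implementation):

```python
class StepRef:
    """A reference to one item of an upstream step's output."""
    def __init__(self, step_name: str, index: int, value):
        self.step_name = step_name  # which upstream step produced it
        self.index = index          # which item in the fan-out
        self._value = value

    def load(self):
        # "Decision": materialize the value so plain Python can branch on it.
        return self._value

class FanOutOutput:
    """Stand-in for the output artifacts of a mapped step."""
    def __init__(self, step_name: str, values: list):
        self._step_name = step_name
        self._values = values

    def chunk(self, idx: int) -> StepRef:
        # "Wiring": hand the orchestrator a reference it can turn into
        # a DAG edge ("this step depends on item idx from upstream").
        return StepRef(self._step_name, idx, self._values[idx])

out = FanOutOutput("score_docs", [0.9, 0.2, 0.7])
to_traverse = []
for i in range(3):
    ref = out.chunk(i)    # DAG edge: downstream step depends on item i
    if ref.load() > 0.5:  # value: ordinary Python control flow
        to_traverse.append(ref)
```

The point is that the reference and the value are different things: the orchestrator needs the reference to build the graph, your loop needs the value to decide what goes into it.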

We just published an example showing all this: a hierarchical document search agent. Simple queries stay simple (fast keyword search). Deep queries fan out to traversal agents that explore a document graph, with configurable depth and max-agent limits. The agent decides "answer found" or "traverse deeper," and @zenml_io handles spawning the next round of steps.
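A hedged sketch of that traverse-or-answer loop, with the framework-side limits (depth, agent budget) made explicit. agent_decide is a hypothetical placeholder for the LLM call; none of these names come from the ZenML example:

```python
MAX_DEPTH = 3   # framework-controlled: how deep traversal may go
MAX_AGENTS = 8  # framework-controlled: total agent budget

def agent_decide(node: dict, query: str):
    # Placeholder for the agent's reasoning: "answer found" if the node's
    # text covers every query keyword, otherwise "traverse deeper".
    if all(w in node["text"].lower() for w in query.lower().split()):
        return "answer", node["text"]
    return "traverse", node.get("children", [])

def search(root: dict, query: str):
    frontier, depth, spawned = [root], 0, 0
    while frontier and depth < MAX_DEPTH:
        next_frontier = []
        for node in frontier:
            if spawned >= MAX_AGENTS:  # budget limit: framework's call
                return None
            spawned += 1
            action, payload = agent_decide(node, query)  # agent's call
            if action == "answer":
                return payload
            next_frontier.extend(payload)  # fan out to child nodes
        frontier, depth = next_frontier, depth + 1
    return None
```

In the real pipeline, each iteration of the inner loop would become a separately tracked traverse_node step instead of a plain function call.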

The DAG ends up looking like the attached diagram.

Each traverse_node is a separate step, created at runtime, with its own artifacts, retries, and lineage.

#MLOps #ZenML #agents #pipelines #LangChain

Dynamic Pipelines (Experimental) | ZenML - Bridging the gap between ML & Ops
