Workflow Authoring

A workflow is a DAG (directed acyclic graph) of nodes that execute in dependency order with a shared context. Workflows enable multi-step pipelines like “download video, transcribe audio, generate summary.”

Workflow definition: A reusable template that defines the graph of nodes, their dependencies, and configurations.

Workflow run: A specific execution instance of a workflow definition, with its own context, state, and event history.

Context: A JSON object passed through the workflow. Nodes read inputs from context and write outputs back to it, enabling data flow between steps.

import { FabricClient, WorkflowBuilder } from "@fabric-platform/sdk";

const workflow = new WorkflowBuilder("transcribe-and-summarize")
  .node("transcribe", "ai_invoke", (n) =>
    n.config({
      operation: "audio.transcribe",
      modality: "audio",
    })
      .input("audio_url", "$context.audio.url")
      .output("transcript", "$context.transcript.text")
  )
  .node("summarize", "ai_invoke", (n) =>
    n.dependsOn("transcribe")
      .config({
        operation: "ai.generate",
        modality: "text",
      })
      .input("prompt", "$context.transcript.text")
      .output("summary", "$context.summary")
  )
  .build();

const fabric = new FabricClient({ apiKey: "fab_xxx" });

// Register the definition so it can be reused across runs
const defId = await fabric.registerDefinition(workflow);

// Create and start a run, stream events until completion
const result = await fabric.runDefinition(workflow, {
  context: {
    audio: { url: "https://example.com/podcast.wav" },
  },
  onEvent: (event) => {
    console.log(`${event.node_key}: ${event.kind}`);
  },
});

console.log("Summary:", result.context.summary);

Workflow runs support idempotency keys to prevent duplicate execution:

const runId = await fabric.runWorkflow(defId, {
  context: { audio: { url: "..." } },
  idempotencyKey: "client-request-xyz",
});

Duplicate submissions with the same idempotency key return the original run.

Nodes communicate through the shared context using input/output bindings:

  • Input bindings resolve values from the shared context before execution (e.g., $context.audio.url)
  • Output bindings write results back to the shared context after execution
  • Merge policies control how outputs are written: Replace, Merge, or Append

The context version is tracked, enabling optimistic concurrency.
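The binding semantics above can be sketched as a small path resolver. This is a hypothetical illustration of how a `$context.foo.bar` path might be read and written (with a simple Replace merge policy), not the SDK's actual implementation:

```typescript
type Context = Record<string, unknown>;

// Resolve an input binding like "$context.audio.url" by walking the dot path.
function resolveBinding(context: Context, binding: string): unknown {
  const path = binding.replace(/^\$context\./, "").split(".");
  let value: unknown = context;
  for (const key of path) {
    if (value === null || typeof value !== "object") return undefined;
    value = (value as Record<string, unknown>)[key];
  }
  return value;
}

// Write an output binding back into the context, creating intermediate
// objects as needed (a simple "Replace" merge policy).
function writeBinding(context: Context, binding: string, output: unknown): void {
  const path = binding.replace(/^\$context\./, "").split(".");
  let target = context;
  for (const key of path.slice(0, -1)) {
    if (typeof target[key] !== "object" || target[key] === null) target[key] = {};
    target = target[key] as Context;
  }
  target[path[path.length - 1]] = output;
}

const ctx: Context = { audio: { url: "https://example.com/podcast.wav" } };
resolveBinding(ctx, "$context.audio.url"); // → "https://example.com/podcast.wav"
writeBinding(ctx, "$context.transcript.text", "hello world");
```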

Nodes execute in dependency order using Kahn’s algorithm for topological sorting. Nodes with no unresolved dependencies execute concurrently.
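The scheduling described above can be sketched with Kahn's algorithm: repeatedly take every node with zero unresolved dependencies (those can run concurrently), then release their dependents. An illustrative sketch, not the executor's actual code:

```typescript
// deps maps each node key to the keys it depends on.
function topologicalWaves(deps: Record<string, string[]>): string[][] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const node of Object.keys(deps)) {
    indegree.set(node, deps[node].length);
    for (const dep of deps[node]) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), node]);
    }
  }
  const waves: string[][] = [];
  let ready = Array.from(indegree.entries())
    .filter(([, d]) => d === 0)
    .map(([n]) => n);
  while (ready.length > 0) {
    waves.push(ready);
    const next: string[] = [];
    for (const node of ready) {
      for (const dependent of dependents.get(node) ?? []) {
        const d = indegree.get(dependent)! - 1;
        indegree.set(dependent, d);
        if (d === 0) next.push(dependent);
      }
    }
    ready = next;
  }
  return waves; // each inner array is a batch that can execute concurrently
}

// "transcribe" has no dependencies; "summarize" depends on it.
topologicalWaves({ transcribe: [], summarize: ["transcribe"] });
// → [["transcribe"], ["summarize"]]
```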

Nodes support configurable retry behavior:

  • Maximum retry attempts
  • Backoff strategy (exponential)
  • Per-node timeout enforcement
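A minimal sketch of what per-node retry with exponential backoff and timeout enforcement might look like internally; the executor's actual code and configuration surface are not shown here:

```typescript
// Race a node's work against a timer, clearing the timer afterwards so a
// settled attempt doesn't leave a dangling rejection.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("node timed out")), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

async function runWithRetry<T>(
  fn: () => Promise<T>,
  opts: { maxAttempts: number; baseDelayMs: number; timeoutMs: number },
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
    try {
      return await withTimeout(fn(), opts.timeoutMs);
    } catch (err) {
      lastError = err;
      if (attempt < opts.maxAttempts) {
        // Exponential backoff: base * 2^(attempt - 1).
        const delay = opts.baseDelayMs * 2 ** (attempt - 1);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError; // retries exhausted
}
```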

Workflows support two failure strategies:

Mode                  Behavior
Fail-fast             Cancel remaining nodes when any node fails
Partial completion    Continue executing independent branches even if one fails
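The difference between the two strategies can be sketched with a hypothetical helper: given the dependency map and a failed node, which pending nodes are skipped?

```typescript
// deps maps each node key to the keys it depends on.
function nodesToSkip(
  deps: Record<string, string[]>,
  failed: string,
  mode: "fail-fast" | "partial-completion",
): string[] {
  const pending = Object.keys(deps).filter((n) => n !== failed);
  if (mode === "fail-fast") return pending; // cancel everything remaining

  // Partial completion: skip only the transitive dependents of the failed
  // node; independent branches keep running.
  const skipped = new Set<string>([failed]);
  let changed = true;
  while (changed) {
    changed = false;
    for (const node of pending) {
      if (!skipped.has(node) && deps[node].some((d) => skipped.has(d))) {
        skipped.add(node);
        changed = true;
      }
    }
  }
  return pending.filter((n) => skipped.has(n));
}

// "summarize" depends on the failed "transcribe", but "thumbnail" is an
// independent branch and keeps running under partial completion.
nodesToSkip(
  { transcribe: [], summarize: ["transcribe"], thumbnail: [] },
  "transcribe",
  "partial-completion",
); // → ["summarize"]
```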

The execution plane uses Postgres SELECT ... FOR UPDATE SKIP LOCKED for step claiming, ensuring:

  • No double-execution across workers
  • Automatic lease expiry if a worker crashes
  • Horizontal scaling with multiple executor workers
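A claim query in that shape might look like the following; the table and column names here are assumptions for illustration, not the execution plane's actual schema:

```typescript
// Builds a Postgres query that atomically claims one ready step (or one
// whose lease has expired) and sets a fresh lease. SKIP LOCKED lets
// concurrent workers pass over rows another worker is claiming.
function buildClaimQuery(leaseSeconds: number): string {
  return `
    UPDATE workflow_steps
    SET status = 'claimed',
        lease_expires_at = now() + interval '${leaseSeconds} seconds'
    WHERE id = (
      SELECT id FROM workflow_steps
      WHERE status = 'ready'
         OR (status = 'claimed' AND lease_expires_at < now())
      ORDER BY created_at
      FOR UPDATE SKIP LOCKED
      LIMIT 1
    )
    RETURNING id;
  `;
}
```

Including expired leases in the candidate set is what gives automatic recovery when a worker crashes mid-step.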
A workflow run moves through the following states:

State       Description
created     Run exists but hasn’t started
running     Executor is processing nodes
completed   All nodes completed successfully
failed      One or more nodes failed
cancelled   Run was cancelled

Subscribe to workflow run events via SSE:

Event                     Description
workflow.run.created      Run was created
workflow.run.started      Executor began processing
workflow.run.completed    All nodes completed
workflow.run.failed       Run failed
workflow.run.cancelled    Run was cancelled
workflow.node.ready       Node dependencies satisfied
workflow.node.started     Node execution began
workflow.node.completed   Node finished with output
workflow.node.failed      Node execution failed
workflow.node.skipped     Node skipped (dependency failed)
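Client-side handling of the stream can be sketched with a small frame parser; this assumes each SSE frame carries a JSON payload with kind and node_key fields, which is an assumption about the wire format rather than the documented API:

```typescript
interface RunEvent {
  kind: string;
  node_key?: string;
}

// An SSE frame is newline-separated "field: value" lines; the data lines
// concatenate into the event payload.
function parseSseFrame(frame: string): RunEvent | null {
  const data = frame
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice(5).trim())
    .join("\n");
  return data ? (JSON.parse(data) as RunEvent) : null;
}

const event = parseSseFrame(
  'event: message\ndata: {"kind":"workflow.node.completed","node_key":"transcribe"}',
);
console.log(event?.kind); // prints "workflow.node.completed"
```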