# Node Reference
Fabric workflows are built by composing nodes — self-contained operations that process inputs and produce outputs. Each node declares its inputs, outputs, runtime requirements, and default configuration.
## Prerequisites

Nodes that use Docker containers (FFmpeg, Whisper, yt-dlp, Python ML) require the tool images to be built first:
```sh
just docker-build-tools
```

## Node Categories
| Category | Description |
|---|---|
| AI | Text generation, extraction, classification, and embedding via AI providers |
| Media & Source | Video/audio ingestion, transcoding, and media processing |
| Video Analysis | Scene detection, face reframing, and video analysis |
| Text Processing | Title generation, description writing, and text refinement |
| Workflow Control | Fan-out, fan-in, conditional branching, and human approval |
| State Persistence | Key-value storage for workflow state |
| Other | HTTP calls, asset storage, and publishing |
## Node Anatomy

Every node has:
- Operation — canonical identifier (e.g., `ai.generate`, `source.ingest`)
- Inputs — typed ports that accept data from upstream nodes or workflow context
- Outputs — typed ports that produce data for downstream nodes
- Runtime — how the node executes (AI provider, Docker container, native, webhook, etc.)
- Policy — timeout, retries, and failure behavior
- Configuration — node-specific settings (model, provider, format, etc.)
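As an illustration, the anatomy above can be sketched as a plain TypeScript object. The field and type names here simply mirror the bullet list; they are assumptions for clarity, not the actual Fabric node schema.

```typescript
// Hypothetical shape mirroring the node anatomy above — not the real Fabric types.
interface NodeDefinition {
  operation: string;                       // canonical identifier, e.g. "ai.generate"
  inputs: Record<string, string>;          // port name -> port type
  outputs: Record<string, string>;         // port name -> port type
  runtime: "ai" | "docker" | "native" | "webhook";
  policy: { timeoutSec: number; retries: number; failWorkflowOnError: boolean };
  config: Record<string, unknown>;         // node-specific settings (model, provider, ...)
}

const generateNode: NodeDefinition = {
  operation: "ai.generate",
  inputs: { prompt: "text" },
  outputs: { completion: "text" },
  runtime: "ai",
  policy: { timeoutSec: 120, retries: 2, failWorkflowOnError: true },
  config: { provider: "openai", model: "gpt-4o" },
};
```

Thinking of a node this way makes the per-node reference pages easier to read: each page documents exactly these six facets for one operation.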
## Using Nodes in Workflows

Nodes are composed into workflows using the SDK or JSON definitions. Each node page includes usage examples in TypeScript, Python, and raw JSON.
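As a sketch of what composing nodes might look like, the example below wires a source node into an AI node by connecting an output port to an input port. The `nodes`/`edges` shape and the `"<nodeId>.<port>"` addressing are illustrative assumptions, not the actual Fabric workflow format — consult the per-node pages for real definitions.

```typescript
// Illustrative workflow: source.ingest feeding ai.generate.
// The nodes/edges shape here is an assumption, not the real Fabric schema.
const workflow = {
  name: "summarize-video",
  nodes: [
    { id: "ingest", operation: "source.ingest", config: { format: "audio" } },
    { id: "summarize", operation: "ai.generate", config: { model: "gpt-4o" } },
  ],
  edges: [
    // connect ingest's output port to summarize's input port
    { from: "ingest.transcript", to: "summarize.prompt" },
  ],
};

// A loader would resolve each "<nodeId>.<port>" reference against the
// declared nodes before executing the graph; this checks node ids only.
const nodeIds = new Set(workflow.nodes.map((n) => n.id));
const edgesValid = workflow.edges.every(
  (e) => nodeIds.has(e.from.split(".")[0]) && nodeIds.has(e.to.split(".")[0]),
);
```

The same graph could equally be expressed as raw JSON or built with the Python SDK; the port-to-port wiring is the part that carries over regardless of syntax.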