Provider Nodes
Provider nodes route AI operations through fabric-providers, enabling text generation, image creation, audio transcription, and embedding computation.
How It Works
- Node specifies an operation (e.g., audio.transcribe) and a ProviderRuntimeConfig (modality, model, tier)
- ProviderRuntimeExecutor builds a ProviderRequest from the config and shared context
- Request is routed through ProviderRouter (tier-aware, cost-based, health-checked)
- Usage metadata is captured in the NodeExecutionResult
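The first two steps can be sketched with illustrative types. The struct fields and function signature below are assumptions for illustration only, not the real fabric-providers API:

```rust
// Hypothetical shapes for ProviderRuntimeConfig and ProviderRequest;
// field names are illustrative assumptions, not fabric's actual types.
#[derive(Debug, Clone)]
struct ProviderRuntimeConfig {
    modality: String,      // e.g. "audio"
    model: Option<String>, // provider-specific model name
    tier: String,          // "basic" or "premium"
}

#[derive(Debug)]
struct ProviderRequest {
    operation: String,
    modality: String,
    model: Option<String>,
    input: String, // resolved from the shared context
}

// Step 2: the executor builds a ProviderRequest from the node's config
// plus an input value resolved from shared context.
fn build_request(operation: &str, cfg: &ProviderRuntimeConfig, input: &str) -> ProviderRequest {
    ProviderRequest {
        operation: operation.to_string(),
        modality: cfg.modality.clone(),
        model: cfg.model.clone(),
        input: input.to_string(),
    }
}

fn main() {
    let cfg = ProviderRuntimeConfig {
        modality: "audio".to_string(),
        model: None,
        tier: "basic".to_string(),
    };
    let req = build_request("audio.transcribe", &cfg, "$context.audio.url");
    println!("{req:?}");
}
```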
Routing
The ProviderRouter selects a provider based on:
- Tier preference — basic (local/free) or premium (remote/paid)
- Cost — cheapest provider first within a tier
- Health — skip unhealthy providers
- Capability — provider must support the requested modality and operation
If a provider fails, the router falls back to the next eligible provider.
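The four rules plus fallback amount to filter-then-sort logic. The sketch below is an assumption about how such a router could work, with hypothetical names and made-up costs; it is not the real ProviderRouter:

```rust
// Illustrative routing sketch. Provider, route(), and the cost field are
// assumptions for this example; provider names and costs are fabricated.
#[derive(Debug)]
struct Provider {
    name: &'static str,
    tier: &'static str,                  // "basic" or "premium"
    cost_per_1k_tokens: f64,             // used for cheapest-first ordering
    healthy: bool,
    operations: &'static [&'static str], // supported operations
}

// Returns eligible providers in routing order: filter by tier preference,
// health, and capability, then sort cheapest-first. Entries after the first
// are the fallbacks tried when an earlier provider fails.
fn route<'a>(providers: &'a [Provider], operation: &str, tier: &str) -> Vec<&'a Provider> {
    let mut eligible: Vec<&Provider> = providers
        .iter()
        .filter(|p| p.tier == tier)                    // tier preference
        .filter(|p| p.healthy)                         // skip unhealthy
        .filter(|p| p.operations.contains(&operation)) // capability check
        .collect();
    eligible.sort_by(|a, b| a.cost_per_1k_tokens.total_cmp(&b.cost_per_1k_tokens));
    eligible
}

fn main() {
    let providers = [
        Provider { name: "alpha", tier: "premium", cost_per_1k_tokens: 0.03, healthy: true, operations: &["text.generate"] },
        Provider { name: "beta", tier: "premium", cost_per_1k_tokens: 0.01, healthy: true, operations: &["text.generate"] },
    ];
    let order = route(&providers, "text.generate", "premium");
    println!("{:?}", order.iter().map(|p| p.name).collect::<Vec<_>>());
}
```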
Streaming
Set "stream": true in the node config to use stream_execute. Chunks are collected and concatenated into the final output.
Streaming protocols by provider:
- OpenAI — SSE (Server-Sent Events)
- Ollama — NDJSON (Newline-Delimited JSON)
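The chunk-collection step for an NDJSON stream can be sketched as below. This is a dependency-free illustration that extracts a "response" field with a plain string scan (and does not handle escaped quotes); a real client would parse each line with a JSON parser:

```rust
// Sketch: collect an NDJSON (Ollama-style) stream into one output string.
// Each chunk is a JSON object on its own line; lines without a "response"
// field (e.g. the final {"done":true} marker) contribute nothing.
fn extract_response(line: &str) -> Option<&str> {
    let pat = "\"response\":\"";
    let start = line.find(pat)? + pat.len();
    let end = line[start..].find('"')? + start;
    Some(&line[start..end])
}

// Concatenate the text of every chunk, in arrival order.
fn collect_ndjson(stream: &str) -> String {
    stream.lines().filter_map(extract_response).collect()
}

fn main() {
    let stream = "{\"response\":\"Hel\"}\n{\"response\":\"lo\"}\n{\"done\":true}";
    println!("{}", collect_ndjson(stream)); // prints "Hello"
}
```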
Example
```rust
NodeDefinition::provider("transcribe", "audio.transcribe", "audio")
    .input("audio_url", "$context.audio.url")
    .output("transcript", "$context.transcript", MergePolicy::Replace)
    .requires("provider.audio.transcribe")
```

Cost Tracking
Provider executions capture usage metadata in the result:
| Field | Description |
|---|---|
| input_tokens | Number of input tokens consumed |
| output_tokens | Number of output tokens generated |
| estimated_cost_usd | Estimated cost in USD |
This data flows into Fabric’s cost tracking system, enabling per-request attribution by org, principal, API key, job, and workflow.
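A minimal sketch of how the three usage fields could combine into a cost estimate, assuming per-1,000-token rates. The rates and function name here are hypothetical inputs, not fabric's pricing table:

```rust
// Hedged sketch: deriving estimated_cost_usd from token counts.
// in_rate / out_rate are USD per 1,000 tokens and are hypothetical;
// local providers would effectively use 0.0 rates.
struct Usage {
    input_tokens: u64,
    output_tokens: u64,
    estimated_cost_usd: f64,
}

fn estimate(input_tokens: u64, output_tokens: u64, in_rate: f64, out_rate: f64) -> Usage {
    let estimated_cost_usd =
        input_tokens as f64 / 1000.0 * in_rate + output_tokens as f64 / 1000.0 * out_rate;
    Usage { input_tokens, output_tokens, estimated_cost_usd }
}

fn main() {
    let u = estimate(1_000, 500, 0.03, 0.06);
    println!("{} in / {} out -> ${:.4}", u.input_tokens, u.output_tokens, u.estimated_cost_usd);
}
```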
Available Providers
| Provider | Tier | Modalities | Streaming |
|---|---|---|---|
| OpenAI | Premium | Text, Image (DALL-E), Embedding | Yes (SSE) |
| Anthropic | Premium | Text | No |
| Ollama | Basic | Text, Embedding | Yes (NDJSON) |
| Whisper | Basic | Audio | No |
| ComfyUI | Basic | Image (Stable Diffusion) | No (async poll) |
| ONNX | Basic | Configurable | No |
| Candle | Basic | Text, Embedding | No |
| Echo | Basic | All (test stub) | No |
Cost Estimation
Before executing a provider node, estimate the cost:

```shell
curl -X POST http://localhost:3001/v1/providers/estimate \
  -H 'content-type: application/json' \
  -d '{
    "modality": "text",
    "model": "gpt-4",
    "input": {"prompt": "Hello world"}
  }'
```

Local models (Ollama, Whisper) always return $0.00.