Provider Nodes

Provider nodes route AI operations through fabric-providers, enabling text generation, image creation, audio transcription, and embedding computation. A provider node executes as follows:

  1. Node specifies an operation (e.g., audio.transcribe) and ProviderRuntimeConfig (modality, model, tier)
  2. ProviderRuntimeExecutor builds a ProviderRequest from the config and shared context
  3. Request is routed through ProviderRouter (tier-aware, cost-based, health-checked)
  4. Usage metadata is captured in the NodeExecutionResult
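The steps above can be sketched in Rust. All type and field names here are illustrative assumptions standing in for the real fabric-providers API, not its actual definitions:

```rust
// Illustrative sketch of steps 1-2: a node's config plus shared context
// produce a ProviderRequest. Names are assumed, not the real API.

#[derive(Debug, Clone)]
struct ProviderRuntimeConfig {
    modality: String,  // e.g. "audio"
    operation: String, // e.g. "audio.transcribe"
    model: String,
    tier: String,      // "basic" or "premium"
}

#[derive(Debug)]
struct ProviderRequest {
    operation: String,
    model: String,
    input: String, // resolved from the shared context
}

// Step 2: build a ProviderRequest from the node config and shared context.
fn build_request(cfg: &ProviderRuntimeConfig, context_input: &str) -> ProviderRequest {
    ProviderRequest {
        operation: cfg.operation.clone(),
        model: cfg.model.clone(),
        input: context_input.to_string(),
    }
}

fn main() {
    let cfg = ProviderRuntimeConfig {
        modality: "audio".into(),
        operation: "audio.transcribe".into(),
        model: "whisper-base".into(),
        tier: "basic".into(),
    };
    let req = build_request(&cfg, "https://example.com/clip.wav");
    assert_eq!(req.operation, "audio.transcribe");
    println!("{:?}", req);
}
```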

The ProviderRouter selects a provider based on:

  1. Tier preference — basic (local/free) or premium (remote/paid)
  2. Cost — cheapest provider first within a tier
  3. Health — skip unhealthy providers
  4. Capability — provider must support the requested modality and operation

If a provider fails, the router falls back to the next eligible provider.
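The selection and fallback rules above can be modeled as a filter-then-sort over candidate providers. This is a minimal sketch, assuming hypothetical `Provider` fields; the real router's data model may differ:

```rust
// Sketch of tier-aware, cost-based, health-checked routing with fallback.
// The ordering follows the four criteria listed above; names are assumed.

#[derive(Debug)]
struct Provider {
    name: &'static str,
    tier: &'static str,      // "basic" or "premium"
    cost_per_1k_tokens: f64, // 0.0 for local providers
    healthy: bool,
    operations: &'static [&'static str],
}

/// Return eligible providers in the order the router would try them:
/// capability and health filter first, then preferred tier, then lowest cost.
/// Fallback on failure means trying the next entry in this list.
fn route<'a>(providers: &'a [Provider], operation: &str, preferred_tier: &str) -> Vec<&'a Provider> {
    let mut eligible: Vec<&Provider> = providers
        .iter()
        .filter(|p| p.healthy && p.operations.contains(&operation))
        .collect();
    eligible.sort_by(|a, b| {
        let tier_rank = |p: &Provider| if p.tier == preferred_tier { 0 } else { 1 };
        tier_rank(a)
            .cmp(&tier_rank(b))
            .then(a.cost_per_1k_tokens.partial_cmp(&b.cost_per_1k_tokens).unwrap())
    });
    eligible
}

fn main() {
    let providers = [
        Provider { name: "openai", tier: "premium", cost_per_1k_tokens: 0.03, healthy: true, operations: &["text.generate"] },
        Provider { name: "ollama", tier: "basic", cost_per_1k_tokens: 0.0, healthy: true, operations: &["text.generate"] },
    ];
    let order = route(&providers, "text.generate", "basic");
    // Ollama is tried first (preferred tier, free); OpenAI is the fallback.
    assert_eq!(order[0].name, "ollama");
    assert_eq!(order[1].name, "openai");
}
```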

Set "stream": true in the node config to use stream_execute. Chunks are collected and concatenated into the final output.

Streaming protocols by provider:

  • OpenAI — SSE (Server-Sent Events)
  • Ollama — NDJSON (Newline-Delimited JSON)
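Regardless of wire protocol, the collection step described above is a simple concatenation of ordered text deltas. A minimal sketch (the chunk representation is an assumption for illustration):

```rust
// Sketch of how stream_execute output is assembled: chunks arrive in
// order and are concatenated into the final node output.

fn collect_stream<I: IntoIterator<Item = String>>(chunks: I) -> String {
    // Each chunk carries a text delta; the final output is their concatenation.
    chunks.into_iter().collect()
}

fn main() {
    // Deltas as an SSE (OpenAI) or NDJSON (Ollama) stream might deliver them.
    let chunks = vec!["Hel".to_string(), "lo, ".to_string(), "world".to_string()];
    assert_eq!(collect_stream(chunks), "Hello, world");
}
```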

Example provider node definition:

NodeDefinition::provider("transcribe", "audio.transcribe", "audio")
    .input("audio_url", "$context.audio.url")
    .output("transcript", "$context.transcript", MergePolicy::Replace)
    .requires("provider.audio.transcribe")

Provider executions capture usage metadata in the result:

Field                 Description
input_tokens          Number of input tokens consumed
output_tokens         Number of output tokens generated
estimated_cost_usd    Estimated cost in USD

This data flows into Fabric’s cost tracking system, enabling per-request attribution by org, principal, API key, job, and workflow.
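The cost figure is derived from the token counts. A minimal sketch, where the `Usage` type and the per-1k-token prices are assumptions for illustration, not Fabric's actual pricing model:

```rust
// Sketch: deriving estimated_cost_usd from the captured token counts.
// Field names mirror the usage table above; prices are illustrative.

#[derive(Debug)]
struct Usage {
    input_tokens: u64,
    output_tokens: u64,
    estimated_cost_usd: f64,
}

fn usage_from_tokens(
    input_tokens: u64,
    output_tokens: u64,
    in_price_per_1k: f64,
    out_price_per_1k: f64,
) -> Usage {
    let cost = (input_tokens as f64 / 1000.0) * in_price_per_1k
        + (output_tokens as f64 / 1000.0) * out_price_per_1k;
    Usage { input_tokens, output_tokens, estimated_cost_usd: cost }
}

fn main() {
    // 1000 input tokens at $0.03/1k plus 500 output tokens at $0.06/1k.
    let u = usage_from_tokens(1000, 500, 0.03, 0.06);
    assert!((u.estimated_cost_usd - 0.06).abs() < 1e-9);
    println!("{:?}", u);
}
```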

Supported providers:

Provider    Tier      Modalities                        Streaming
OpenAI      Premium   Text, Image (DALL-E), Embedding   Yes (SSE)
Anthropic   Premium   Text                              No
Ollama      Basic     Text, Embedding                   Yes (NDJSON)
Whisper     Basic     Audio                             No
ComfyUI     Basic     Image (Stable Diffusion)          No (async poll)
ONNX        Basic     Configurable                      No
Candle      Basic     Text, Embedding                   No
Echo        Basic     All (test stub)                   No

Before executing a provider node, estimate the cost:

curl -X POST http://localhost:3001/v1/providers/estimate \
  -H 'content-type: application/json' \
  -d '{
    "modality": "text",
    "model": "gpt-4",
    "input": {"prompt": "Hello world"}
  }'

Local models (Ollama, Whisper) always return $0.00.
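The tier rule stated above can be sketched as a simple branch; the function name and the premium pricing formula are illustrative assumptions:

```rust
// Sketch of the estimation rule: basic-tier (local) providers always
// estimate $0.00; premium providers bill per token. Names are assumed.

fn estimate_cost_usd(tier: &str, total_tokens: u64, price_per_1k: f64) -> f64 {
    match tier {
        // Local models (Ollama, Whisper) run for free.
        "basic" => 0.0,
        _ => (total_tokens as f64 / 1000.0) * price_per_1k,
    }
}

fn main() {
    assert_eq!(estimate_cost_usd("basic", 10_000, 0.03), 0.0);
    assert!((estimate_cost_usd("premium", 1000, 0.03) - 0.03).abs() < 1e-9);
}
```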