global/ai-generate

AI generation pipeline — wraps Fabric’s provider API for multi-model access.

Category: global
Source: workflows/ai/generate.py

Input fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `anthropic_api_key` | object | – | Per-request Anthropic API key. Falls back to the `ANTHROPIC_API_KEY` env var when absent. |
| `max_tokens` | integer | `2048` | Maximum completion tokens to request. |
| `model` | string | `"gemini-2.0-flash"` | Model identifier. Examples: `gemini-2.0-flash`, `gpt-4o-mini`, `claude-3-5-sonnet-20241022`. |
| `openai_api_key` | object | – | Per-request OpenAI API key. Falls back to the `OPENAI_API_KEY` env var when absent. |
| `prompt` | string | `""` | User prompt sent to the model. |
| `provider` | string | `"auto"` | Provider override. `auto` routes through Fabric's provider API; direct values bypass the router. |
| `regenerate` | object | – | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run-lineage columns. |
| `system_prompt` | object | – | Optional system prompt prepended to the conversation. |
| `temperature` | number | `0.7` | Sampling temperature (0.0–2.0 for most providers). |
| `variants` | integer | `1` | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
Output fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `kind` | object | – | Variant card shape: `video` / `carousel` / `image` / `text`. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `model_used` | string | `""` | The model string the provider actually ran. Matches the input `model` field for direct calls; may differ when Fabric's router picks a fallback. |
| `provider_used` | string | `""` | Which provider actually served the request (`fabric`, `openai`, `anthropic`, `gemini`, etc.). |
| `response` | string | `""` | Generated text from the model. |
| `usage` | object | – | Token-usage dict from the provider. Shape varies but typically contains `input_tokens` / `output_tokens`. |
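The output fields above can be consumed like this. The dict shape is a sketch assumed from the field table (a real run result may nest these under a per-variant entry), and the token arithmetic reads `usage` defensively since its shape varies by provider:

```python
# Sketch: reading a run's output fields. The dict below is a stand-in assumed
# from the field table; it is not an actual API response.
result = {
    "provider_used": "fabric",
    "model_used": "gemini-2.0-flash",
    "response": "Hello from the model.",
    "usage": {"input_tokens": 12, "output_tokens": 5},
}

# `usage` shape varies by provider, so read token counts with defaults.
tokens_in = result["usage"].get("input_tokens", 0)
tokens_out = result["usage"].get("output_tokens", 0)
total_tokens = tokens_in + tokens_out
```

Comparing `model_used` against the `model` you requested is a quick way to detect when the router substituted a fallback.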
Tasks

| Task | Description |
| --- | --- |
| `ai_generate` | Generate text/content via Fabric's provider API or direct SDK calls. |

Save the YAML below as `my-run.yaml`, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are listed as comments above the `input:` block. To set one, copy its line under `input:` and uncomment it.

```yaml
workflow: global/ai-generate
# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Per-request Anthropic API key. Falls back to the ANTHROPIC_API_KEY env var when absent.
# anthropic_api_key: null
#
# Maximum completion tokens to request.
# max_tokens: 2048
#
# Model identifier. Examples: 'gemini-2.0-flash', 'gpt-4o-mini', 'claude-3-5-sonnet-20241022'.
# model: gemini-2.0-flash
#
# Per-request OpenAI API key. Falls back to the OPENAI_API_KEY env var when absent.
# openai_api_key: null
#
# User prompt sent to the model.
# prompt: ""
#
# Provider override. 'auto' routes through Fabric's provider API; direct values bypass the router.
# provider: auto
#
# Optional system prompt prepended to the conversation.
# system_prompt: null
#
# Sampling temperature (0.0–2.0 for most providers).
# temperature: 0.7
#
input: {}
```

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/ai-generate' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```
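The same request can be built with Python's standard library. This is a sketch mirroring the curl example above (same URL, placeholder bearer token, YAML content type); the body is inlined here rather than read from `my-run.yaml`, and `urlopen()` is left commented out so nothing is actually sent:

```python
import urllib.request

# Minimal run-spec body; in practice, read my-run.yaml instead.
yaml_body = b"workflow: global/ai-generate\ninput: {}\n"

req = urllib.request.Request(
    "https://gofabric.dev/v1/workflows/runs?name=global/ai-generate",
    data=yaml_body,
    headers={
        "Authorization": "Bearer fab_xxx",  # placeholder token, as in the curl example
        "Content-Type": "application/yaml",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the run; omitted in this sketch.
```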

Every workflow also accepts the universal `WorkflowInput` fields: `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
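As an illustration, a run-spec that fans out three variants with regeneration hints might look like the sketch below. The placement of `variants` and `regenerate` alongside the other input fields is assumed from the field table above, and the `direction` / `extra_instructions` keys are the ones that table names; which hints a given workflow honors is workflow-specific:

```yaml
workflow: global/ai-generate
input:
  prompt: "Write a product tagline."
  variants: 3                          # engine runs the workflow 3 times with different sampling
  regenerate:
    direction: "make it more concise"  # creative-direction hint read by the workflow
    extra_instructions: "keep the same tone"
```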