global/ai-generate
AI generation pipeline — wraps Fabric’s provider API for multi-model access.
Category: global
Source: workflows/ai/generate.py
Input Schema
| Field | Type | Default | Description |
|---|---|---|---|
| `anthropic_api_key` | object | — | Per-request Anthropic API key. Falls back to the `ANTHROPIC_API_KEY` env var when absent. |
| `max_tokens` | integer | `2048` | Maximum completion tokens to request. |
| `model` | string | `"gemini-2.0-flash"` | Model identifier. Examples: `gemini-2.0-flash`, `gpt-4o-mini`, `claude-3-5-sonnet-20241022`. |
| `openai_api_key` | object | — | Per-request OpenAI API key. Falls back to the `OPENAI_API_KEY` env var when absent. |
| `prompt` | string | `""` | User prompt sent to the model. |
| `provider` | string | `"auto"` | Provider override. `auto` routes through Fabric's provider API; direct values bypass the router. |
| `regenerate` | object | — | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run-lineage columns. |
| `system_prompt` | object | — | Optional system prompt prepended to the conversation. |
| `temperature` | number | `0.7` | Sampling temperature (0.0–2.0 for most providers). |
| `variants` | integer | `1` | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
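To make the universal fields concrete, here is an illustrative run-spec sketch. The prompt text and hint strings are invented; `variants` and `regenerate` are placed under `input:` because this table lists them as input fields, but verify the exact placement against the run-spec reference.

```yaml
# Illustrative only — prompt text and regenerate hints are invented.
workflow: global/ai-generate
input:
  prompt: "Draft a short product announcement."
  model: gemini-2.0-flash
  variants: 3                             # fan out: 3 independent runs, 3 outputs
  regenerate:
    direction: "make it more playful"     # creative-direction hint read by the workflow
    keep: "the opening sentence"          # what the regeneration should preserve
    extra_instructions: "under 80 words"  # extra prompt modulation
```

Because the engine records `parent_run_id` and `parent_variant_index` as lineage columns, a regeneration can be traced back to the run and variant it was derived from.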
Output Schema
| Field | Type | Default | Description |
|---|---|---|---|
| `kind` | object | — | Variant card shape: `video` / `carousel` / `image` / `text`. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `model_used` | string | `""` | The model string the provider actually ran. Matches the input `model` field for direct calls; may differ when Fabric's router picks a fallback. |
| `provider_used` | string | `""` | Which provider actually served the request (`fabric`, `openai`, `anthropic`, `gemini`, etc.). |
| `response` | string | `""` | Generated text from the model. |
| `usage` | object | — | Token-usage dict from the provider. Shape varies but typically contains `input_tokens` / `output_tokens`. |
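For orientation, a single variant's output might look like the sketch below (shown as YAML for readability). All values are invented, and the `usage` keys vary by provider as noted above.

```yaml
# Hypothetical per-variant output entry — field names from the schema above,
# values invented for illustration.
response: "Introducing Kettle One: sunlight in, tea out."
model_used: "gemini-2.0-flash"
provider_used: "fabric"   # Fabric's router served the request
kind: text                # gallery UIs use this to pick a layout
usage:
  input_tokens: 42        # typical keys; exact shape varies by provider
  output_tokens: 17
```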
Task Pipeline
| Task | Description |
|---|---|
| `ai_generate` | Generate text/content via Fabric's provider API or direct SDK calls. |
Run-spec example
Save the YAML below as `my-run.yaml`, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the `input:` block — copy any of those lines under `input:` and uncomment to set a value.
```yaml
workflow: global/ai-generate

# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Per-request Anthropic API key. Falls back to the ANTHROPIC_API_KEY env var when absent.
# anthropic_api_key: null
#
# Maximum completion tokens to request.
# max_tokens: 2048
#
# Model identifier. Examples: 'gemini-2.0-flash', 'gpt-4o-mini', 'claude-3-5-sonnet-20241022'.
# model: gemini-2.0-flash
#
# Per-request OpenAI API key. Falls back to the OPENAI_API_KEY env var when absent.
# openai_api_key: null
#
# User prompt sent to the model.
# prompt: ""
#
# Provider override. 'auto' routes through Fabric's provider API; direct values bypass the router.
# provider: auto
#
# Optional system prompt prepended to the conversation.
# system_prompt: null
#
# Sampling temperature (0.0–2.0 for most providers).
# temperature: 0.7

input: {}
```

Run it locally:
```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:
```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/ai-generate' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal `WorkflowInput` fields — `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
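Since run-specs may also be written as JSON, a hypothetical equivalent submission looks like the sketch below. The `application/json` content-type is an assumption, not confirmed by this page; check the API reference for the exact header the endpoint expects.

```sh
# ASSUMPTION: the endpoint accepts 'content-type: application/json' for JSON run-specs.
# Same endpoint and auth as the YAML example above.
cat > my-run.json <<'EOF'
{
  "workflow": "global/ai-generate",
  "input": { "prompt": "Write a haiku about build pipelines." }
}
EOF
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/ai-generate' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/json' \
  --data-binary @my-run.json
```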