# video/avatar

Avatar talking-head generation — standalone workflow.

**Category:** video
**Source:** `workflows/video/avatar.py`

## Inputs

| Field | Type | Default | Description |
|---|---|---|---|
| `actor` | any | required | Reference to the actor portrait image (PNG/JPG). Face must be clearly visible. |
| `audio` | any | required | Reference to the audio clip to lip-sync (MP3/WAV). Drives the talking-head duration. |
| `avatar_model` | string | `"fal-ai/kling-video/ai-avatar/v2/standard"` | Provider/model id. Supported families: `fal-ai/kling-video/ai-avatar/*`, `seedance/*`, `bytedance/omnihuman*`, or any local model registered via `_LOCAL_AVATAR_MAP`. |
| `prompt` | string | `"Person speaking directly to camera, relaxed posture, natural expression, warm soft key light, shallow depth of field, authentic vlog energy."` | Stylistic prompt sent to the avatar model. Ignored by pure lip-sync providers. |
| `regenerate` | object | | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run-lineage columns. |
| `variants` | integer | `1` | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
## Outputs

| Field | Type | Default | Description |
|---|---|---|---|
| `asset_id` | object | | Fabric asset id, populated when the run had server access. |
| `avatar_model` | string | required | Model that produced the clip. |
| `kind` | object | | Variant card shape: `video` / `carousel` / `image` / `text`. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `video_path` | string | required | Local path (or `output_dir` copy) of the generated mp4. |
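
For orientation, one per-variant output record might serialize roughly like the YAML below. All values are illustrative, and the exact shape depends on the run-output API:

```yaml
avatar_model: fal-ai/kling-video/ai-avatar/v2/standard
video_path: outputs/avatar.mp4  # illustrative local path
kind: video                     # picks the video card layout in gallery UIs
asset_id: null                  # populated only when the run had server access
```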
## Tasks

| Task | Description |
|---|---|
| `render_avatar` | Resolve inputs → call the avatar stage → persist the result. |

Save the YAML below as `my-run.yaml`, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented in the comments above the `input:` block: copy any of those lines under `input:` and uncomment them to set a value.

```yaml
workflow: video/avatar
# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Provider/model id. Supported families: `fal-ai/kling-video/ai-avatar/*`, `seedance/*`, `bytedance/omnihuman*`, or any local model registered via `_LOCAL_AVATAR_MAP`.
# avatar_model: fal-ai/kling-video/ai-avatar/v2/standard
#
# Stylistic prompt sent to the avatar model. Ignored by pure lip-sync providers.
# prompt: "Person speaking directly to camera, relaxed posture, natural expression, warm soft key light, shallow depth of field, authentic vlog energy."
#
input:
  # Typed reference to a file input.
  actor: {}
  # Typed reference to a file input.
  audio: {}
```
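
For a concrete starting point, here is a minimal filled-in sketch. The exact shape of a typed file reference depends on your deployment; the `path` key below is an assumption, not a documented field:

```yaml
workflow: video/avatar
input:
  # Assumed file-reference shape; adjust to your deployment's schema.
  actor: {path: portrait.png}
  audio: {path: voiceover.mp3}
  # Documented optional knob, shown uncommented:
  avatar_model: fal-ai/kling-video/ai-avatar/v2/standard
```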

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/avatar' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal `WorkflowInput` fields: `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
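
As a sketch, a run spec that fans out three variants of a regeneration might look like this, assuming `variants` and `regenerate` sit alongside the other input fields per the table above (all string values are illustrative):

```yaml
workflow: video/avatar
input:
  actor: {}  # fill in your file references as in the quick-start example
  audio: {}
  variants: 3  # 3 independent executions with different sampling (1-10)
  regenerate:
    direction: "warmer key light, slower delivery"  # illustrative hint
    keep: "framing and wardrobe"                    # illustrative
    extra_instructions: "hold eye contact with the camera"  # illustrative
```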