
global/formula-shorts

Formula Shorts — generate videos that clone the style of reference videos.

Category: global
Source: workflows/video/formula_shorts.py

Inputs

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| duration_secs | integer | 0 | Duration override (0 = use formula pacing) |
| formula | object | "" | Formula name (loads from ~/.fabric/formulas/<name>.json), path to a formula JSON file, or inline formula dict |
| mood | string | "" | Mood/tone override (formula default used if empty) |
| platform | string | "tiktok" | Target platform |
| regenerate | object |  | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run lineage columns. |
| script_formula | string | "" | Optional script narrative formula (independent of the VideoFormula above). One of: 'reframe', 'youre_doing_it_wrong', 'validation', 'pattern_interrupt', 'listicle'. The VideoFormula drives visuals/audio; script_formula drives narration structure. See fabric_workflow_sdk.stages.script_formulas. |
| topic | string | "" | Video topic |
| variants | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| visual_style | string | "" | Visual style override |
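The formula field accepts three shapes, per the table above. A sketch of each (values are illustrative, and the keys of the inline dict are placeholders — the real inline shape is whatever the VideoFormula schema defines):

```yaml
# 1. By name: resolved to ~/.fabric/formulas/fast-cuts.json
formula: fast-cuts

# 2. By explicit path to a formula JSON file
formula: ./my-formulas/fast-cuts.json

# 3. Inline formula dict (keys here are hypothetical, not the real schema)
formula:
  name: fast-cuts
  pacing: quick
```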
Outputs

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| audio_asset_id | object |  | Fabric asset ID of the TTS audio |
| kind | object |  | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| script | string | "" | The narration script text |
| tags | string | [] | Hashtags/keywords |
| title | string | "" | Generated title for the short |
| video_asset_id | object |  | Fabric asset ID of the generated video |
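Put together, a per-variant output entry might look like the following. This is an illustrative shape assembled from the fields above, not an authoritative API response; the asset ID values are invented placeholders:

```yaml
kind: video
title: "Why your morning routine backfires"
script: "You've been told to wake up at 5am. Here's the problem."
tags: ["productivity", "morningroutine"]
audio_asset_id: ast_example_audio   # placeholder, not a real asset ID
video_asset_id: ast_example_video   # placeholder, not a real asset ID
```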
Pipeline (task order):

load_formula → apply_formula_inputs → normalize_input → generate_script → generate_keyframes → resolve_character_refs → merge_pre_production → reconcile_continuity → multiview_character_prep → generate_ai_actor → generate_voiceover → generate_all_broll → generate_bgm → merge_generation → generate_talking_heads → lipsync_talking_heads → mix_audio → compose_timeline → prepare_for_post_processing → burn_subtitles → burn_hook_overlay_safe → generate_effect_filters → apply_effects → add_outro_fade → ai-shorts-output → collect_final_output → export_for_platforms → evaluate_output → collect_ai_shorts_output
| Task | Description |
| --- | --- |
| load_formula | Load and validate a VideoFormula from file or inline dict. |
| apply_formula_inputs | Map formula fields to the input keys that the ai_shorts pipeline expects. |
| normalize_input | Normalize simplified input params to the pipeline's internal keys. |
| generate_script | Generate viral script — delegates to SDK stage. |
| generate_keyframes | Generate 2x2 grid keyframes for visual consistency across b-roll segments. |
| resolve_character_refs | Normalize character_refs (URL/path/asset_id) → resolved URLs. |
| merge_pre_production | Merge parallel pre-production branches (keyframes + character refs). |
| reconcile_continuity | Vision-based continuity reconciliation (opt-in). |
| multiview_character_prep | Generate multi-view character references for stronger visual consistency. |
| generate_ai_actor | Generate AI actor portrait — delegates to SDK stage. |
| generate_voiceover | Generate TTS voiceover — delegates to SDK stage. |
| generate_all_broll | Generate all b-roll clips — delegates to SDK stage. |
| generate_bgm | Generate background music — delegates to SDK stage. |
| merge_generation | Merge parallel generation branches into a single context. |
| generate_talking_heads | Generate talking heads — delegates to SDK stage. |
| lipsync_talking_heads | Lip-sync talking heads — delegates to SDK stage. |
| mix_audio | Mix voiceover + BGM — delegates to SDK stage. |
| compose_timeline | Assemble video timeline — delegates to SDK stage. |
| prepare_for_post_processing | Map keys for post-processing — delegates to SDK stage. |
| burn_subtitles | Burn word-level subtitles — delegates to SDK stage. |
| burn_hook_overlay_safe | Burn hook text overlay — delegates to SDK stage. |
| generate_effect_filters | Use Gemini to suggest FFmpeg filters based on content type. |
| apply_effects | Apply AI-generated FFmpeg filters to the video. |
| add_outro_fade | Add a fade-to-black outro at the end of the video. |
| ai-shorts-output | Video output validation (ai-shorts-output). |
| collect_final_output | Collect final output — delegates to SDK stage. |
| export_for_platforms | Conditionally resize + generate metadata for target platforms. |
| evaluate_output | Score the generated short for engagement potential. |
| collect_ai_shorts_output | Extract the AiShortsOutput fields from the pipeline result. |
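The load_formula step is documented as accepting a name, a path, or an inline dict. A minimal sketch of that resolution order, assuming the documented layout (the real stage lives in workflows/video/formula_shorts.py and additionally validates the result as a VideoFormula; resolve_formula here is a hypothetical helper, not the SDK API):

```python
import json
from pathlib import Path

def resolve_formula(formula):
    """Resolve a formula input: an inline dict is used as-is; a path to
    a .json file is read from disk; any other string is treated as a
    formula name under ~/.fabric/formulas/. Illustrative sketch only."""
    if isinstance(formula, dict):
        return formula                      # inline formula dict
    path = Path(formula).expanduser()
    if path.suffix != ".json":              # bare name, not a file path
        path = Path("~/.fabric/formulas").expanduser() / f"{formula}.json"
    return json.loads(path.read_text())
```

Validation (e.g. against a Pydantic VideoFormula model) would follow the load; the sketch stops at the raw dict.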

Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented. Optional knobs appear as commented lines with their descriptions; to set one, copy its line under the input: block and uncomment it.

workflow: global/formula-shorts
# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Duration override (0 = use formula pacing)
# duration_secs: 0
#
# Formula name (loads from ~/.fabric/formulas/<name>.json), path to a formula JSON file, or inline formula dict
# formula: ""
#
# Mood/tone override (formula default used if empty)
# mood: ""
#
# Target platform
# platform: tiktok
#
# Optional script narrative formula (independent of the VideoFormula above). One of: 'reframe', 'youre_doing_it_wrong', 'validation', 'pattern_interrupt', 'listicle'. The VideoFormula drives visuals/audio; script_formula drives narration structure. See fabric_workflow_sdk.stages.script_formulas.
# script_formula: ""
#
# Video topic
# topic: ""
#
# Visual style override
# visual_style: ""
#
input: {}
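A filled-in example built from the template above (the topic, style, and formula name are illustrative values, not defaults):

```yaml
workflow: global/formula-shorts
input:
  topic: "5 kitchen knife mistakes"
  platform: tiktok
  formula: fast-cuts          # hypothetical formula name under ~/.fabric/formulas/
  script_formula: listicle
  visual_style: "bright, high-contrast close-ups"
```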

Run it locally:

fab-workflow --from-file my-run.yaml

Or submit over the wire — the same file is the request body:

curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/formula-shorts' \
-H 'Authorization: Bearer fab_xxx' \
-H 'content-type: application/yaml' \
--data-binary @my-run.yaml

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).
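For example, a run spec using both universal fields might look like this. The regenerate keys (direction, keep, extra_instructions) are the ones documented above, but the exact value shapes shown are assumptions:

```yaml
workflow: global/formula-shorts
variants: 3                   # fan out into 3 independently sampled runs
regenerate:
  direction: "punchier hook, slower b-roll"     # creative-direction hint
  keep: ["script"]                              # assumed shape: list of parts to preserve
  extra_instructions: "keep the same voiceover tone"
input:
  topic: "5 kitchen knife mistakes"
```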

Type-contract warnings surfaced for this workflow:

  • Task generate_script has no Pydantic types — contract is opaque to consumers.
  • Task merge_pre_production has no Pydantic types — contract is opaque to consumers.
  • Task reconcile_continuity has no Pydantic types — contract is opaque to consumers.
  • Task generate_ai_actor has no Pydantic types — contract is opaque to consumers.
  • Task generate_voiceover has no Pydantic types — contract is opaque to consumers.
  • Task generate_all_broll has no Pydantic types — contract is opaque to consumers.
  • Task generate_bgm has no Pydantic types — contract is opaque to consumers.
  • Task merge_generation has no Pydantic types — contract is opaque to consumers.
  • Task generate_talking_heads has no Pydantic types — contract is opaque to consumers.
  • Task lipsync_talking_heads has no Pydantic types — contract is opaque to consumers.
  • Task mix_audio has no Pydantic types — contract is opaque to consumers.
  • Task prepare_for_post_processing has no Pydantic types — contract is opaque to consumers.
  • Task burn_subtitles has no Pydantic types — contract is opaque to consumers.
  • Task burn_hook_overlay_safe has no Pydantic types — contract is opaque to consumers.
  • Task ai-shorts-output has no Pydantic types — contract is opaque to consumers.
  • Task collect_final_output has no Pydantic types — contract is opaque to consumers.