video/reel-from-reference

Reel From Reference — analyze a reference reel and produce an original video in the same format.

Category: video
Source: workflows/video/reel_from_reference.py

Inputs

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `audience` | string | `""` | Target audience. |
| `background` | string | `"ai"` | Background video source: `ai`, `random`, `stock`, `solid`. |
| `bgm` | boolean | `true` | Add background music. |
| `bgm_volume` | number | `0.12` | Background music volume (0.0–1.0). |
| `creator_persona` | string | `"confident young creator, natural and conversational"` | Creator persona description. |
| `enable_ocr` | boolean | `false` | Run OCR on sampled frames. |
| `platform` | string | `"instagram_reels"` | Target platform. |
| `quality` | string | `"standard"` | Quality preset: `free`, `budget`, `standard`, `premium`, `local`, `local-power`. |
| `reference_video_path` | string | `""` | Local path to the reference video (MP4/MOV). |
| `reference_video_url` | string | `""` | URL to download the reference video (YouTube, TikTok, Instagram, direct link). |
| `regenerate` | object | | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run-lineage columns. |
| `subtitles` | boolean | `true` | Burn karaoke captions onto the video. |
| `tone` | string | `"energetic"` | Tone: energetic, calm, sarcastic, etc. |
| `topic` | string | required | Topic for the new video. |
| `variants` | integer | `1` | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| `whisper_model` | string | `"large-v3"` | Whisper model for transcription. |
Outputs

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `asset_id` | object | | Fabric asset ID. |
| `format_archetype` | string | `""` | Detected format archetype. |
| `format_dna_json_path` | string | `""` | Path to `format_dna.json`. |
| `kind` | object | | Variant card shape: `video` / `carousel` / `image` / `text`. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `script_text` | string | `""` | The narration text used. |
| `summary_markdown_path` | string | `""` | Path to the format summary. |
| `video_path` | string | `""` | Path to the final rendered MP4. |
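As a sketch of how a consumer might use these output fields, the helper below picks the final video artifact for each variant out of a run-output payload. The payload shape shown here (a `variants` list whose entries carry `kind` and `video_path`) is illustrative only, not the documented API schema:

```python
def collect_variant_videos(run_output: dict) -> list[str]:
    """Return the video_path of every variant whose card shape is 'video'.

    `run_output` is assumed (illustratively) to look like:
    {"variants": [{"kind": "video", "video_path": "...", ...}, ...]}
    """
    paths = []
    for entry in run_output.get("variants", []):
        # `kind` tells gallery UIs which layout to use; keep only video cards.
        if entry.get("kind") == "video" and entry.get("video_path"):
            paths.append(entry["video_path"])
    return paths


sample = {
    "variants": [
        {"kind": "video", "video_path": "/runs/abc/variant-0/final.mp4"},
        {"kind": "text", "script_text": "..."},
    ]
}
print(collect_variant_videos(sample))  # ['/runs/abc/variant-0/final.mp4']
```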
validate_and_probe → detect_shots_and_visual_analysis → transcribe_and_analyze_audio → infer_format_dna → generate_recreation_script → prepare_input → generate_voiceover → generate_background → merge_branches → compose_video → add_subtitles → add_bgm → collect_output → collect_reel_output
| Task | Description |
| --- | --- |
| `validate_and_probe` | Download the video (if a URL is provided), validate it, and extract metadata. |
| `detect_shots_and_visual_analysis` | Detect shots, classify composition, and analyze motion and color. |
| `transcribe_and_analyze_audio` | Transcribe speech, map pauses, detect retention hooks, and analyze audio layers. |
| `infer_format_dna` | Use an LLM to infer structural patterns, then assemble the FormatDNA. |
| `generate_recreation_script` | Generate an original script from the format DNA, then bridge to quick-shorts. |
| `prepare_input` | Validate input and generate a script if needed. |
| `generate_voiceover` | Generate the TTS voiceover (reuses the SDK voiceover stage). |
| `generate_background` | Generate per-segment AI video clips, or fall back to catalog/stock footage. |
| `merge_branches` | Merge the parallel voiceover and background branches. |
| `compose_video` | Compose per-segment video clips with the voiceover. |
| `add_subtitles` | Burn karaoke subtitles if enabled. |
| `add_bgm` | Generate and mix background music if enabled. |
| `collect_output` | Persist the artifact, build the upgrade payload, and return output. |
| `collect_reel_output` | Collect the final output from quick-shorts and add format metadata. |

Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; the optional knobs appear as commented lines above the input: block. To set one, copy it into the input: block and uncomment it.

workflow: video/reel-from-reference
# Optional fields: copy any line(s) below into the `input:` block and uncomment to set.
# Target audience.
# audience: ""
#
# Background video source: 'ai', 'random', 'stock', 'solid'.
# background: ai
#
# Add background music.
# bgm: true
#
# Background music volume (0.0-1.0).
# bgm_volume: 0.12
#
# Creator persona description.
# creator_persona: "confident young creator, natural and conversational"
#
# Run OCR on sampled frames.
# enable_ocr: false
#
# Target platform.
# platform: instagram_reels
#
# Quality preset: free, budget, standard, premium, local, local-power
# quality: standard
#
# Local path to reference video (MP4/MOV).
# reference_video_path: ""
#
# URL to download reference video (YouTube, TikTok, Instagram, direct link).
# reference_video_url: ""
#
# Burn karaoke captions onto video.
# subtitles: true
#
# Tone: energetic, calm, sarcastic, etc.
# tone: energetic
#
# Whisper model for transcription.
# whisper_model: large-v3
#
input:
# Topic for the new video.
topic: ""
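For instance, a filled-in run spec might look like the fragment below. All values are illustrative, chosen from the documented knobs above:

```yaml
workflow: video/reel-from-reference
input:
  topic: "3 underrated VS Code shortcuts"
  reference_video_url: "https://www.tiktok.com/@creator/video/123"  # illustrative URL
  tone: energetic
  quality: standard
  subtitles: true
  bgm_volume: 0.1
```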

Run it locally:

fab-workflow --from-file my-run.yaml

Or submit over the wire — the same file is the request body:

curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/reel-from-reference' \
-H 'Authorization: Bearer fab_xxx' \
-H 'content-type: application/yaml' \
--data-binary @my-run.yaml

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).
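The universal fields slot into the same run-spec file. A sketch (the `regenerate` sub-fields are the ones named in the input table above; values are illustrative, and the exact shape may differ from this):

```yaml
workflow: video/reel-from-reference
variants: 3            # fan-out: run the workflow 3 times with different sampling
regenerate:
  direction: "punchier hook in the first two seconds"  # creative-direction hint
  keep: "voice and background style"
  extra_instructions: "avoid jargon"
input:
  topic: "3 underrated VS Code shortcuts"
```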

  • Task generate_voiceover has no Pydantic types — contract is opaque to consumers.
  • Task merge_branches has no Pydantic types — contract is opaque to consumers.