video/reel-from-reference
Reel From Reference — analyze a reference reel and produce an original video in the same format.
Category: video
Source: workflows/video/reel_from_reference.py
Input Schema

| Field | Type | Default | Description |
|---|---|---|---|
| audience | string | "" | Target audience. |
| background | string | "ai" | Background video source: 'ai', 'random', 'stock', 'solid'. |
| bgm | boolean | true | Add background music. |
| bgm_volume | number | 0.12 | Background music volume (0.0-1.0). |
| creator_persona | string | "confident young creator, natural and conversational" | Creator persona description. |
| enable_ocr | boolean | false | Run OCR on sampled frames. |
| platform | string | "instagram_reels" | Target platform. |
| quality | string | "standard" | Quality preset: free, budget, standard, premium, local, local-power. |
| reference_video_path | string | "" | Local path to reference video (MP4/MOV). |
| reference_video_url | string | "" | URL to download reference video (YouTube, TikTok, Instagram, direct link). |
| regenerate | object | — | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run lineage columns. |
| subtitles | boolean | true | Burn karaoke captions onto video. |
| tone | string | "energetic" | Tone: energetic, calm, sarcastic, etc. |
| topic | string | required | Topic for the new video. |
| variants | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| whisper_model | string | "large-v3" | Whisper model for transcription. |
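As a minimal illustration of how these fields compose, the sketch below merges user overrides over the documented defaults before submitting a run. The `build_input` helper is hypothetical (not part of any SDK); only the field names and default values come from the table above.

```python
# Defaults mirrored from the input-schema table above.
DEFAULTS = {
    "audience": "",
    "background": "ai",
    "bgm": True,
    "bgm_volume": 0.12,
    "creator_persona": "confident young creator, natural and conversational",
    "enable_ocr": False,
    "platform": "instagram_reels",
    "quality": "standard",
    "reference_video_path": "",
    "reference_video_url": "",
    "subtitles": True,
    "tone": "energetic",
    "variants": 1,
    "whisper_model": "large-v3",
}

def build_input(**overrides):
    """Merge overrides onto the defaults; topic is the only required field."""
    if not overrides.get("topic"):
        raise ValueError("topic is required")
    return {**DEFAULTS, **overrides}

payload = build_input(topic="5 myths about cold brew", tone="calm")
```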
Output Schema

| Field | Type | Default | Description |
|---|---|---|---|
| asset_id | object | — | Fabric asset ID. |
| format_archetype | string | "" | Detected format archetype. |
| format_dna_json_path | string | "" | Path to format_dna.json. |
| kind | object | — | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| script_text | string | "" | The narration text used. |
| summary_markdown_path | string | "" | Path to format summary. |
| video_path | string | "" | Path to the final rendered MP4. |
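To make the shape concrete, here is a sketch of inspecting one per-variant entry. The field names come from the table above, but the envelope and all values shown are illustrative only, not a real API response.

```python
import json

# Illustrative per-variant entry; field names match the output schema above,
# values are made up for the example.
variant_json = """
{
  "kind": "video",
  "format_archetype": "hook-driven explainer",
  "format_dna_json_path": "artifacts/format_dna.json",
  "script_text": "Here's the part nobody tells you...",
  "summary_markdown_path": "artifacts/summary.md",
  "video_path": "artifacts/final.mp4"
}
"""

out = json.loads(variant_json)
# `kind` tells gallery UIs which layout to use; video entries carry an MP4 path.
is_renderable = out["kind"] == "video" and out["video_path"].endswith(".mp4")
```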
Task Pipeline

validate_and_probe → detect_shots_and_visual_analysis → transcribe_and_analyze_audio → infer_format_dna → generate_recreation_script → prepare_input → generate_voiceover → generate_background → merge_branches → compose_video → add_subtitles → add_bgm → collect_output → collect_reel_output

| Task | Description |
|---|---|
| validate_and_probe | Download (if URL provided), validate input video, and extract metadata. |
| detect_shots_and_visual_analysis | Detect shots, classify composition, analyze motion and color. |
| transcribe_and_analyze_audio | Transcribe speech, map pauses, detect retention hooks, analyze audio layers. |
| infer_format_dna | Use LLM to infer structural patterns, then assemble FormatDNA. |
| generate_recreation_script | Generate an original script from format DNA, then bridge to quick-shorts. |
| prepare_input | Validate input and generate script if needed. |
| generate_voiceover | Generate TTS voiceover — reuses SDK voiceover stage. |
| generate_background | Generate per-segment AI video clips, or fall back to catalog/stock. |
| merge_branches | Merge parallel voiceover + background branches. |
| compose_video | Compose per-segment video clips with voiceover. |
| add_subtitles | Burn karaoke subtitles if enabled. |
| add_bgm | Generate and mix background music if enabled. |
| collect_output | Persist artifact, build upgrade payload, and return output. |
| collect_reel_output | Collect final output from quick-shorts and add format metadata. |
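The pipeline is mostly linear, but merge_branches joins the parallel voiceover and background branches. The sketch below models that shape as a small dependency graph. This is a simplification for illustration, not the engine's actual scheduler; the edges are inferred from the task chain and descriptions above.

```python
from graphlib import TopologicalSorter

# Simplified task DAG: generate_voiceover and generate_background both
# depend only on prepare_input, so they can run in parallel before
# merge_branches joins them. (Illustrative; not the real scheduler.)
deps = {
    "detect_shots_and_visual_analysis": {"validate_and_probe"},
    "transcribe_and_analyze_audio": {"detect_shots_and_visual_analysis"},
    "infer_format_dna": {"transcribe_and_analyze_audio"},
    "generate_recreation_script": {"infer_format_dna"},
    "prepare_input": {"generate_recreation_script"},
    "generate_voiceover": {"prepare_input"},   # parallel branch A
    "generate_background": {"prepare_input"},  # parallel branch B
    "merge_branches": {"generate_voiceover", "generate_background"},
    "compose_video": {"merge_branches"},
    "add_subtitles": {"compose_video"},
    "add_bgm": {"add_subtitles"},
    "collect_output": {"add_bgm"},
    "collect_reel_output": {"collect_output"},
}

# One valid linear execution order consistent with the dependencies.
order = list(TopologicalSorter(deps).static_order())
```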
Run-spec example

Save the YAML below as my-run.yaml, edit the values, and run with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the `input:` block — copy any line under `input:` and uncomment to set.

```yaml
workflow: video/reel-from-reference

# Optional fields — copy any line(s) under `input:` and uncomment to set:
#
# Target audience.
# audience: ""
#
# Background video source: 'ai', 'random', 'stock', 'solid'.
# background: ai
#
# Add background music.
# bgm: true
#
# Background music volume (0.0-1.0).
# bgm_volume: 0.12
#
# Creator persona description.
# creator_persona: "confident young creator, natural and conversational"
#
# Run OCR on sampled frames.
# enable_ocr: false
#
# Target platform.
# platform: instagram_reels
#
# Quality preset: free, budget, standard, premium, local, local-power
# quality: standard
#
# Local path to reference video (MP4/MOV).
# reference_video_path: ""
#
# URL to download reference video (YouTube, TikTok, Instagram, direct link).
# reference_video_url: ""
#
# Burn karaoke captions onto video.
# subtitles: true
#
# Tone: energetic, calm, sarcastic, etc.
# tone: energetic
#
# Whisper model for transcription.
# whisper_model: large-v3

input:
  # Topic for the new video.
  topic: ""
```

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/reel-from-reference' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).
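The same POST can be built from Python's standard library. A sketch, assuming the curl call above is the contract: the run-spec YAML is sent verbatim as the request body, fab_xxx is a placeholder key, and the actual network call is left commented out.

```python
import urllib.request

# Inline run-spec YAML; in practice you would read my-run.yaml from disk.
run_spec = b"""\
workflow: video/reel-from-reference
input:
  topic: "5 myths about cold brew"
"""

req = urllib.request.Request(
    "https://gofabric.dev/v1/workflows/runs?name=video/reel-from-reference",
    data=run_spec,
    headers={
        "Authorization": "Bearer fab_xxx",  # placeholder API key
        "Content-Type": "application/yaml",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually submit the run
```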
Warnings

- Task generate_voiceover has no Pydantic types — contract is opaque to consumers.
- Task merge_branches has no Pydantic types — contract is opaque to consumers.