# video/research-to-shorts

Research → Hooks → AI Shorts — end-to-end pipeline.

- Category: video
- Source: `workflows/video/research_to_shorts.py`
## Input Schema

| Field | Type | Default | Description |
|---|---|---|---|
| `bgm_volume` | number | 0.15 | Background music volume (0.0-1.0) |
| `duration_secs` | integer | 60 | Target video duration in seconds |
| `mood` | string | "" | Mood/tone for the video |
| `platform` | string | "tiktok" | Target platform |
| `presenter_look` | string | "" | Actor appearance description |
| `query` | string | "" | Alias for `topic` (either works) |
| `regenerate` | object | — | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run-lineage columns. |
| `topic` | string | "" | Topic to research and produce a video about |
| `variants` | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| `visual_style` | string | "" | Visual style for b-roll generation |
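As a concrete illustration, an input payload matching the schema above can be assembled and sanity-checked as in the following sketch. The helper function and its validation bounds are assumptions drawn from the field descriptions, not the engine's authoritative validation:

```python
# Build and sanity-check a run-spec input for video/research-to-shorts.
# Bounds mirror the schema descriptions (bgm_volume 0.0-1.0, variants 1-10);
# treat this as an illustrative sketch, not the engine's own validator.

def build_input(topic: str, *, platform: str = "tiktok",
                duration_secs: int = 60, bgm_volume: float = 0.15,
                variants: int = 1) -> dict:
    if not 0.0 <= bgm_volume <= 1.0:
        raise ValueError("bgm_volume must be in 0.0-1.0")
    if not 1 <= variants <= 10:
        raise ValueError("variants must be in 1-10")
    return {
        "topic": topic,
        "platform": platform,
        "duration_secs": duration_secs,
        "bgm_volume": bgm_volume,
        "variants": variants,
    }

spec = build_input("AI video tools", variants=3)
```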
## Output Schema

| Field | Type | Default | Description |
|---|---|---|---|
| `audio_asset_id` | object | — | Fabric asset ID of the TTS audio |
| `kind` | object | — | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `script` | string | "" | The narration script text |
| `tags` | string[] | — | Hashtags/keywords |
| `title` | string | "" | Generated title |
| `video_asset_id` | object | — | Fabric asset ID of the generated video |
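A gallery consumer typically dispatches on the per-variant `kind` field described above. The layout names in this sketch are hypothetical; only the kind values (video / carousel / image / text) come from the schema:

```python
# Sketch of a gallery UI picking a layout from a variant's `kind` field.
# Layout names ("player", "swiper", ...) are invented for illustration.

LAYOUTS = {
    "video": "player",
    "carousel": "swiper",
    "image": "figure",
    "text": "card",
}

def pick_layout(variant: dict) -> str:
    # Fall back to the simplest layout if `kind` is missing or unknown.
    kind = variant.get("kind", "text")
    return LAYOUTS.get(kind, "card")

variant = {"kind": "video", "title": "Generated title", "video_asset_id": "asset_123"}
print(pick_layout(variant))  # -> player
```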
## Task Pipeline

normalize_input → plan_research → search_web → read_webpages → research_youtube → fetch_rss_feeds → search_reddit → merge_research → synthesize_research → format_report → bridge_research_to_hooks → generate_hook_ideas → select_hook → generate_script → generate_ai_actor → generate_voiceover → generate_all_broll → generate_bgm → merge_generation → generate_talking_heads → lipsync_talking_heads → mix_audio → compose_timeline → prepare_for_post_processing → burn_subtitles → burn_hook_overlay_safe → generate_effect_filters → apply_effects → add_outro_fade → collect_final_output

| Task | Description |
|---|---|
| `normalize_input` | Map `topic` → `query` so callers can use either name. |
| `plan_research` | Validate input and decide which sources to activate. |
| `search_web` | Search the web via Exa (semantic search) or Jina Search (free fallback). |
| `read_webpages` | Read top web results as clean markdown via Jina Reader (free, no API key). |
| `research_youtube` | Search YouTube for relevant videos and extract metadata + transcript excerpts. |
| `fetch_rss_feeds` | Parse RSS/Atom feeds for latest content. |
| `search_reddit` | Search Reddit via shared Reddit client (old.reddit.com, UA rotation, retry). |
| `merge_research` | Join function — merge parallel branch outputs into a unified context. |
| `synthesize_research` | Analyze all collected sources with Gemini and produce a structured synthesis. |
| `format_report` | Structure the final output, stripping internal keys. |
| `bridge_research_to_hooks` | Transform deep-research output into hook-generation input. |
| `generate_hook_ideas` | Generate hook ideas — delegates to SDK stage (70/20/10 strategy, PromptExtension). |
| `select_hook` | Pick the strongest hook and prepare input for video script generation. |
| `generate_script` | Generate a viral video script, optionally grounded in research findings. |
| `generate_ai_actor` | Generate AI actor portrait — delegates to SDK stage. |
| `generate_voiceover` | Generate TTS voiceover — delegates to SDK stage. |
| `generate_all_broll` | Generate all b-roll clips — delegates to SDK stage. |
| `generate_bgm` | Generate background music — delegates to SDK stage. |
| `merge_generation` | Merge parallel generation branches into a single context. |
| `generate_talking_heads` | Generate talking heads — delegates to SDK stage. |
| `lipsync_talking_heads` | Lip-sync talking heads — delegates to SDK stage. |
| `mix_audio` | Mix voiceover + BGM — delegates to SDK stage. |
| `compose_timeline` | Assemble video timeline — delegates to SDK stage. |
| `prepare_for_post_processing` | Map keys for post-processing — delegates to SDK stage. |
| `burn_subtitles` | Burn word-level subtitles — delegates to SDK stage. |
| `burn_hook_overlay_safe` | Burn hook text overlay — delegates to SDK stage. |
| `generate_effect_filters` | Use Gemini to suggest FFmpeg filters based on content type. |
| `apply_effects` | Apply AI-generated FFmpeg filters to the video. |
| `add_outro_fade` | Add a fade-to-black outro at the end of the video. |
| `collect_final_output` | Collect final output — delegates to SDK stage. |
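The research sources run as parallel branches that `merge_research` joins back into a single context. The fan-out/join pattern can be sketched with `asyncio`; the stub bodies below merely stand in for the real tasks, so the function names match the pipeline but the implementations are illustrative only:

```python
import asyncio

# Illustrative fan-out/join: the four research branches run concurrently,
# and merge_research combines their outputs into one context dict.
# Stub bodies stand in for the real search tasks.

async def search_web(q: str) -> dict:        return {"web": f"results for {q}"}
async def research_youtube(q: str) -> dict:  return {"youtube": f"videos for {q}"}
async def fetch_rss_feeds(q: str) -> dict:   return {"rss": f"feeds for {q}"}
async def search_reddit(q: str) -> dict:     return {"reddit": f"threads for {q}"}

async def merge_research(query: str) -> dict:
    branches = [search_web(query), research_youtube(query),
                fetch_rss_feeds(query), search_reddit(query)]
    merged: dict = {}
    # gather() runs all branches concurrently; each returns a partial dict.
    for result in await asyncio.gather(*branches):
        merged.update(result)
    return merged

context = asyncio.run(merge_research("ai shorts"))
```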
## Run-spec example

Save the YAML below as `my-run.yaml`, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the `input:` block — copy any line under `input:` and uncomment it to set a value.
```yaml
workflow: video/research-to-shorts

# Optional fields — copy any line(s) under `input:` and uncomment to set:

# Background music volume (0.0-1.0)
# bgm_volume: 0.15
#
# Target video duration in seconds
# duration_secs: 60
#
# Mood/tone for the video
# mood: ""
#
# Target platform
# platform: tiktok
#
# Actor appearance description
# presenter_look: ""
#
# Alias for topic (either works)
# query: ""
#
# Topic to research and produce a video about
# topic: ""
#
# Visual style for b-roll generation
# visual_style: ""

input: {}
```

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit it over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/research-to-shorts' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal `WorkflowInput` fields — `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
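The same submission can be expressed in Python. This sketch only builds the request rather than sending it; the endpoint and header names are copied from the curl example, and `fab_xxx` stays a placeholder token:

```python
import urllib.request

# Build (but do not send) the run-submission request from the curl example.
# Pass the raw bytes of your my-run.yaml file as spec_yaml.

def build_run_request(spec_yaml: bytes, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://gofabric.dev/v1/workflows/runs?name=video/research-to-shorts",
        data=spec_yaml,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "content-type": "application/yaml",
        },
    )

req = build_run_request(b"workflow: video/research-to-shorts\ninput: {}\n", "fab_xxx")
# urllib.request.urlopen(req) would submit the run.
```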
## Warnings

- Task `merge_research` has no Pydantic types — contract is opaque to consumers.
- Task `generate_hook_ideas` has no Pydantic types — contract is opaque to consumers.
- Task `generate_ai_actor` has no Pydantic types — contract is opaque to consumers.
- Task `generate_voiceover` has no Pydantic types — contract is opaque to consumers.
- Task `generate_all_broll` has no Pydantic types — contract is opaque to consumers.
- Task `generate_bgm` has no Pydantic types — contract is opaque to consumers.
- Task `merge_generation` has no Pydantic types — contract is opaque to consumers.
- Task `generate_talking_heads` has no Pydantic types — contract is opaque to consumers.
- Task `lipsync_talking_heads` has no Pydantic types — contract is opaque to consumers.
- Task `mix_audio` has no Pydantic types — contract is opaque to consumers.
- Task `prepare_for_post_processing` has no Pydantic types — contract is opaque to consumers.
- Task `burn_subtitles` has no Pydantic types — contract is opaque to consumers.
- Task `burn_hook_overlay_safe` has no Pydantic types — contract is opaque to consumers.
- Task `collect_final_output` has no Pydantic types — contract is opaque to consumers.
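Because these tasks publish no Pydantic models, consumers reading their payloads may want an explicit runtime guard against contract drift. The required keys in this sketch are assumptions drawn from the output schema above, not a guaranteed contract:

```python
# Defensive check for the untyped collect_final_output payload.
# REQUIRED_KEYS is an assumption based on the documented output schema.

REQUIRED_KEYS = {"video_asset_id", "audio_asset_id", "script", "title"}

def validate_final_output(payload: dict) -> dict:
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise KeyError(f"collect_final_output payload missing: {sorted(missing)}")
    return payload

ok = validate_final_output({
    "video_asset_id": "asset_v1",
    "audio_asset_id": "asset_a1",
    "script": "…",
    "title": "Generated title",
})
```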