global/clip-generator
Clip Generator — full OpenShorts-style pipeline with all post-processing.
Category: global
Source: workflows/clip/clip_generator.py
Input Schema

| Field | Type | Default | Description |
|---|---|---|---|
| media_path | string | "" | Local media file path (alternative to url) |
| num_clips | integer | 5 | Number of clips to extract |
| regenerate | object | — | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run-lineage columns. |
| url | string | "" | Video URL to process |
| variants | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
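Putting these fields together, a regeneration run-spec might look like the sketch below — every value here is hypothetical, chosen only to show where the documented keys (direction, keep, extra_instructions, variants) sit:

```yaml
workflow: global/clip-generator
input:
  url: "https://example.com/talk.mp4"   # hypothetical source video
  num_clips: 3
  variants: 2                           # fan out into two variant executions
  regenerate:                           # marks this run as a regeneration
    direction: "punchier hooks, faster pacing"
    keep: "the subtitle styling from the parent run"
    extra_instructions: "avoid clips longer than 45 seconds"
```

The engine records parent_run_id and parent_variant_index on the new run automatically; they are not set in the spec.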
Output Schema

| Field | Type | Default | Description |
|---|---|---|---|
| clips | object[] | — | Processed clips |
| kind | object | — | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| source | string | "" | Source URL or path |
| total_clips | integer | 0 | Number of clips produced |
| workflow | string | "clip-generator" | Workflow name |
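For illustration, a per-variant output could be shaped roughly like this — the top-level fields follow the schema above, but the inner clip entries and the exact kind payload are assumptions, not the documented shapes:

```json
{
  "workflow": "clip-generator",
  "source": "https://example.com/talk.mp4",
  "total_clips": 2,
  "kind": { "video": {} },
  "clips": [
    { "path": "clips/clip_001.mp4" },
    { "path": "clips/clip_002.mp4" }
  ]
}
```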
Task Pipeline

download_media → transcribe_audio → detect_viral_moments → extract_clips → detect_and_track_subjects → crop_to_vertical → align_words → render_subtitles → generate_hook_text → burn_hook_overlay → generate_effect_filters → apply_effects → add_outro_fade → collect_final_clips

| Task | Description |
|---|---|
| download_media | Download video/audio from a URL or copy from a local path. |
| transcribe_audio | Transcribe media — delegates to an SDK stage (Whisper word-level timestamps). |
| detect_viral_moments | Detect high-potential viral moments in the video transcript using Gemini. |
| extract_clips | Extract individual clips from the source video using FFmpeg. |
| detect_and_track_subjects | Detect faces and track subjects across frames using MediaPipe + YOLO. |
| crop_to_vertical | Crop the video to 9:16 vertical, following the tracked subjects. |
| align_words | Format transcript segments into subtitle entries with word-level timing. |
| render_subtitles | Burn subtitles into the video using FFmpeg’s drawtext or ASS filter. |
| generate_hook_text | Generate hook text for a clip using an LLM. |
| burn_hook_overlay | Burn the hook-text overlay onto the first 3 seconds of the video. |
| generate_effect_filters | Use Gemini to suggest FFmpeg filters based on content type. |
| apply_effects | Apply the AI-generated FFmpeg filters to the video. |
| add_outro_fade | Add a fade-to-black outro at the end of the video. |
| collect_final_clips | Collect all processed clips into the final output. |
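To make the crop_to_vertical step concrete, here is a minimal sketch of how a 9:16 crop window can follow a tracked subject. This is an illustrative simplification, not the workflow's actual implementation — the real task also smooths the subject track across frames:

```python
def vertical_crop_window(frame_w: int, frame_h: int,
                         subject_cx: float) -> tuple[int, int, int, int]:
    """Return an (x, y, w, h) crop window with a 9:16 aspect ratio,
    centered horizontally on the tracked subject and clamped to the frame."""
    crop_h = frame_h                       # keep the full frame height
    crop_w = min(int(crop_h * 9 / 16), frame_w)  # 9:16 width for that height
    x = int(subject_cx - crop_w / 2)       # center the window on the subject
    x = max(0, min(x, frame_w - crop_w))   # clamp inside the frame
    return x, 0, crop_w, crop_h
```

For a 1920×1080 source the window is 607×1080; as the tracked subject moves toward a frame edge, the window slides with them until it hits the boundary.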
Run-spec example

Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the input: block — copy any line under input: and uncomment it to set.

```yaml
workflow: global/clip-generator

# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Local media file path (alternative to url)
# media_path: ""
#
# Number of clips to extract
# num_clips: 5
#
# Video URL to process
# url: ""
#
input: {}
```

Run it locally:

```shell
fab-workflow --from-file my-run.yaml
```

Or submit it over the wire — the same file is the request body:

```shell
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/clip-generator' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).
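The same request can be built from Python with only the standard library. This sketch mirrors the curl command above (endpoint, headers, and body are taken from it); calling urlopen on the result would actually submit the run:

```python
from urllib.request import Request

def build_run_request(run_spec_path: str, token: str) -> Request:
    """Build the POST the curl example sends: the run-spec file is the
    raw request body, and the workflow name is passed as a query param."""
    with open(run_spec_path, "rb") as f:
        body = f.read()
    return Request(
        "https://gofabric.dev/v1/workflows/runs?name=global/clip-generator",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/yaml",
        },
        method="POST",
    )

# Submitting for real (requires a valid API token):
# from urllib.request import urlopen
# with urlopen(build_run_request("my-run.yaml", "fab_xxx")) as resp:
#     print(resp.status)
```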