
global/clip-generator

Clip Generator — full OpenShorts-style pipeline with all post-processing.

Category: global
Source: workflows/clip/clip_generator.py

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `media_path` | string | `""` | Local media file path (alternative to `url`) |
| `num_clips` | integer | `5` | Number of clips to extract |
| `regenerate` | object | — | When set, this run is a regeneration. Workflows may read `direction` / `keep` / `extra_instructions` to modulate prompts; the engine persists `parent_run_id` and `parent_variant_index` as run lineage columns. |
| `url` | string | `""` | Video URL to process |
| `variants` | integer | `1` | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
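For instance, a minimal `input:` block might set just the source URL and clip count (the URL below is a placeholder, not a real endpoint):

```yaml
workflow: global/clip-generator
input:
  url: "https://example.com/talk.mp4"  # placeholder URL
  num_clips: 3
```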
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `clips` | object[] | — | Processed clips |
| `kind` | object | — | Variant card shape: `video` / `carousel` / `image` / `text`. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| `source` | string | `""` | Source URL or path |
| `total_clips` | integer | `0` | Number of clips produced |
| `workflow` | string | `"clip-generator"` | — |
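Illustratively, one variant's output might look like the sketch below. The top-level field names come from the table above; the per-clip entries and the `kind` sub-fields are placeholders, since their exact shapes are not documented here:

```yaml
workflow: clip-generator
source: "https://example.com/talk.mp4"   # placeholder source
total_clips: 2
kind: video                              # sub-shape is illustrative
clips:
  - path: "clips/clip_01.mp4"            # per-clip shape is illustrative
  - path: "clips/clip_02.mp4"
```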
download_media → transcribe_audio → detect_viral_moments → extract_clips → detect_and_track_subjects → crop_to_vertical → align_words → render_subtitles → generate_hook_text → burn_hook_overlay → generate_effect_filters → apply_effects → add_outro_fade → collect_final_clips
| Task | Description |
| --- | --- |
| `download_media` | Download video/audio from a URL or copy from a local path. |
| `transcribe_audio` | Transcribe media — delegates to SDK stage (Whisper word-level timestamps). |
| `detect_viral_moments` | Detect high-potential viral moments from a video transcript using Gemini. |
| `extract_clips` | Extract individual clips from the source video using FFmpeg. |
| `detect_and_track_subjects` | Detect faces and track subjects across frames using MediaPipe + YOLO. |
| `crop_to_vertical` | Crop video to 9:16 vertical following tracked subjects. |
| `align_words` | Format transcript segments into subtitle entries with word-level timing. |
| `render_subtitles` | Burn subtitles into the video using FFmpeg's drawtext or ASS filter. |
| `generate_hook_text` | Generate hook text for a clip using an LLM. |
| `burn_hook_overlay` | Burn hook text overlay onto the first 3 seconds of the video. |
| `generate_effect_filters` | Use Gemini to suggest FFmpeg filters based on content type. |
| `apply_effects` | Apply AI-generated FFmpeg filters to the video. |
| `add_outro_fade` | Add a fade-to-black outro at the end of the video. |
| `collect_final_clips` | Collect all processed clips into the final output. |
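To make the `align_words` idea concrete, here is a minimal sketch of grouping Whisper-style word-level timestamps into short subtitle entries. The function name, the input shape, and the fixed three-words-per-entry rule are all illustrative assumptions, not the workflow's actual implementation:

```python
def group_words(words, max_words=3):
    """Group word-level timestamps into subtitle entries.

    words: list of {"word": str, "start": float, "end": float},
    as produced by a word-level transcription (shape assumed here).
    """
    entries = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        entries.append({
            "text": " ".join(w["word"] for w in chunk),
            "start": chunk[0]["start"],   # entry starts with its first word
            "end": chunk[-1]["end"],      # and ends with its last word
        })
    return entries

words = [
    {"word": "this", "start": 0.0, "end": 0.2},
    {"word": "clip", "start": 0.2, "end": 0.5},
    {"word": "will", "start": 0.5, "end": 0.7},
    {"word": "go", "start": 0.7, "end": 0.9},
    {"word": "viral", "start": 0.9, "end": 1.4},
]
print(group_words(words))
```

Each entry carries the start of its first word and the end of its last, which is what a downstream `render_subtitles`-style step needs to time its drawtext or ASS cues.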

Save the YAML below as `my-run.yaml`, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the `input:` block — copy any line under `input:` and uncomment it to set a value.

```yaml
workflow: global/clip-generator
# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Local media file path (alternative to url)
# media_path: ""
#
# Number of clips to extract
# num_clips: 5
#
# Video URL to process
# url: ""
#
input: {}
```

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=global/clip-generator' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal `WorkflowInput` fields — `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
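A run spec using both universal fields might look like the sketch below. The placement alongside the other input fields follows the input field table above; all values are illustrative, and the `direction` / `keep` / `extra_instructions` keys are the regeneration hints named there:

```yaml
workflow: global/clip-generator
input:
  url: "https://example.com/talk.mp4"   # placeholder URL
  variants: 3                           # run 3 independent variants
  regenerate:
    direction: "make the hooks punchier"
    keep: "the subtitle styling"
    extra_instructions: "avoid clickbait phrasing"
```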