video/slideshow
Slideshow / Carousel pipeline — generate listicle content for TikTok/Instagram/LinkedIn.
Category: video
Source: workflows/video/slideshow.py
Input Schema
| Field | Type | Default | Description |
|---|---|---|---|
film_grain | boolean | true | Apply UGC-style film grain + warm tint for organic feel |
language | string | "en" | Language code |
mixed_media | boolean | false | Use video hook slide for Instagram mixed-media carousel |
niche | string | "" | Content niche |
num_items | integer | 5 | Number of list items (slides) |
previous_topics | string[] | — | Previously generated topics for dedup |
quality | string | "local" | Quality preset (default ‘local’). Other values: ‘free’, ‘budget’, ‘standard’, ‘premium’. The ‘local’ preset uses in-process Diffusers / Kokoro / Whisper backends and falls through to remote per-stage when a backend isn’t installed, so machines without the local stack still produce a slideshow — they just call out to remote APIs for the missing pieces. |
regenerate | object | — | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run lineage columns. |
slide_duration | number | 3.5 | Duration per slide in seconds |
slide_layout | string | "overlay_gradient" | Slide layout style |
style_hint | string | "" | Visual style hint for image sourcing |
topic | string | "" | Listicle topic |
transition_duration | number | 0.5 | Transition duration in seconds |
variants | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
visual_style | string | "" | Visual style override |
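The universal knobs above compose inside a run-spec's `input:` block. A minimal sketch (all values are hypothetical; the `regenerate` sub-fields follow the `direction` / `keep` / `extra_instructions` names noted in the table, and which of them a given workflow actually reads is workflow-dependent):

```yaml
input:
  topic: "5 morning habits that actually stick"   # hypothetical topic
  num_items: 5
  quality: local
  variants: 3            # fan out 3 independent executions
  regenerate:            # marks this run as a regeneration
    direction: "punchier hook, warmer palette"
    keep: "slide layout"
    extra_instructions: "avoid generic stock-photo looks"
```

Note that the engine, not the caller, persists the `parent_run_id` / `parent_variant_index` lineage columns when regenerating.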
Output Schema
| Field | Type | Default | Description |
|---|---|---|---|
hook_text | string | "" | Hook text |
hook_video_path | object | — | Path to video hook slide (mixed-media) |
kind | object | — | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
pdf_path | object | — | Path to multi-page PDF carousel |
platform_metadata | object | — | Per-platform metadata |
slide_count | integer | 0 | Number of slides |
slide_png_paths | string[] | — | Paths to individual slide PNGs |
topic | string | "" | Generated topic |
video_path | object | — | Path to stitched MP4 slideshow video |
Task Pipeline
generate_listicle → source_slide_images → generate_slideshow_voiceover → generate_slideshow_bgm → generate_video_hook_slide → merge_slideshow_assets → compose_slides → export_slide_pngs → assemble_video_slideshow → apply_ugc_treatment → generate_pdf → generate_platform_metadata

| Task | Description |
|---|---|
generate_listicle | Generate listicle script with cross-run dedup. |
source_slide_images | Source images for each slide based on image_prompts from script. |
generate_slideshow_voiceover | Generate voiceover from the full listicle narration. |
generate_slideshow_bgm | Generate background music. |
generate_video_hook_slide | Generate a short video clip for the hook slide (Instagram mixed-media carousel). |
merge_slideshow_assets | Merge parallel generation branches. |
compose_slides | Apply text overlays to sourced images, producing final slide PNGs. |
export_slide_pngs | Save individual slide PNGs as artifacts (carousel-ready). |
assemble_video_slideshow | Stitch slides into MP4 with transitions, voiceover, and music. |
apply_ugc_treatment | Apply film grain + warm tint to video and slide PNGs for organic UGC feel. |
generate_pdf | Convert slide PNGs to a multi-page PDF for LinkedIn document carousels. |
generate_platform_metadata | Generate per-platform metadata for the slideshow content. |
Run-spec example
Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the input: block — copy any line under input: and uncomment it to set a value.
```yaml
workflow: video/slideshow

# Optional fields — copy any line(s) under `input:` and uncomment to set:
#
# Apply UGC-style film grain + warm tint for organic feel
# film_grain: true
#
# Language code
# language: en
#
# Use video hook slide for Instagram mixed-media carousel
# mixed_media: false
#
# Content niche
# niche: ""
#
# Number of list items (slides)
# num_items: 5
#
# Previously generated topics for dedup
# previous_topics: []
#
# Quality preset (default 'local'). Other values: 'free', 'budget', 'standard', 'premium'.
# The 'local' preset uses in-process Diffusers / Kokoro / Whisper backends and falls
# through to remote per-stage when a backend isn't installed, so machines without the
# local stack still produce a slideshow — they just call out to remote APIs for the
# missing pieces.
# quality: local
#
# Duration per slide in seconds
# slide_duration: 3.5
#
# Slide layout style
# slide_layout: overlay_gradient
#
# Visual style hint for image sourcing
# style_hint: ""
#
# Listicle topic
# topic: ""
#
# Transition duration in seconds
# transition_duration: 0.5
#
# Visual style override
# visual_style: ""

input: {}
```

Run it locally:

```sh
fab-workflow --from-file my-run.yaml
```

Or submit over the wire — the same file is the request body:

```sh
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/slideshow' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal `WorkflowInput` fields — `variants` (1–10 fan-out) and `regenerate` (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (`metadata`, `priority`, `bundle`, `parent`, etc.).
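As a concrete starting point, here is the template above with a few optional knobs uncommented (the topic and niche values are hypothetical placeholders, not defaults):

```yaml
workflow: video/slideshow
input:
  topic: "budget travel hacks"      # hypothetical topic
  niche: "travel"
  num_items: 7
  mixed_media: true                 # adds the video hook slide for Instagram
  slide_duration: 3.0
```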
Warnings
- Task `merge_slideshow_assets` has no Pydantic types — its contract is opaque to consumers.