
video/reddit-stories

Reddit Stories pipeline — Reddit post screenshots over background video with TTS narration.

Category: video
Source: workflows/video/reddit_stories.py

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| background_audio | string | "" | Background music track |
| background_video | string | "" | Video ID, category, 'random', or 'none' |
| bg_audio_volume | number | 0.05 | Background music volume |
| blocked_words | string | [] | Skip comments with these words |
| language | string | "en" | Target language |
| max_comments | integer | 6 | Max comments to include |
| max_length | integer | 600 | Max comment length (chars) |
| min_length | integer | 100 | Min comment length (chars); shorter comments don't make good video segments |
| min_score | integer | 10 | Min upvote score for comments |
| mode | string | "comments" | 'comments' or 'story' |
| overlay_opacity | number | 1.0 | Screenshot overlay opacity (0.0-1.0) |
| platform | string | "youtube_shorts" | Target platform |
| quality | string | "local" | Quality preset. Other values: 'free', 'budget', 'standard', 'premium'. The 'local' preset uses in-process Diffusers / Kokoro / Whisper backends and falls through to remote per-stage when a backend isn't installed. |
| reddit_url | string | "" | Direct URL to a Reddit post |
| regenerate | object | | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run lineage columns. |
| subreddit | string | "" | Subreddit to search in |
| theme | string | "dark" | 'dark', 'light', or 'transparent' |
| topic | string | "" | Search query within subreddit |
| tts_model | string | "" | TTS model override |
| variants | integer | 1 | Number of independent variant executions (1-10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| voice_mode | string | "" | 'single', 'alternating', or 'random' (empty = preset default) |

No schema defined.

fetch_reddit_post → select_comments → capture_screenshots → generate_segment_tts → fetch_background_video → merge_reddit_assets → compute_timeline_and_concat_audio → compose_reddit_video → mix_background_audio → burn_subtitles → burn_hook_overlay → generate_effect_filters → apply_effects → add_outro_fade → reddit-stories-output → collect_output
| Task | Description |
| --- | --- |
| fetch_reddit_post | Fetch Reddit post and comments via the public JSON API. |
| select_comments | Filter and rank top comments for comment mode. |
| capture_screenshots | Capture Reddit-style screenshots for all segments. |
| generate_segment_tts | Generate TTS audio for each segment with per-commenter voice rotation. |
| fetch_background_video | Select and prepare a background video. |
| merge_reddit_assets | Merge parallel branch outputs into a single context. |
| compute_timeline_and_concat_audio | Build the timeline and concatenate per-segment audio with silence gaps. |
| compose_reddit_video | Compose the final video: overlay screenshots on background video, timed to audio. |
| mix_background_audio | Mix background music under voiceover with speech-aware ducking. |
| burn_subtitles | Burn word-level subtitles. |
| burn_hook_overlay | Burn a hook text overlay, using the post title as the hook. |
| generate_effect_filters | Use Gemini to suggest FFmpeg filters based on content type. |
| apply_effects | Apply AI-generated FFmpeg filters to the video. |
| add_outro_fade | Add a fade-to-black outro at the end of the video. |
| reddit-stories-output | Video output validation. |
| collect_output | Collect final output and save artifact. |

Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented as commented lines. To set one, copy the commented field into the input: block and uncomment it.

workflow: video/reddit-stories
# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Background music track
# background_audio: ""
#
# Video ID, category, 'random', or 'none'
# background_video: ""
#
# Background music volume
# bg_audio_volume: 0.05
#
# Skip comments with these words
# blocked_words: []
#
# Target language
# language: en
#
# Max comments to include
# max_comments: 6
#
# Max comment length (chars)
# max_length: 600
#
# Min comment length (chars) — shorter comments don't make good video segments
# min_length: 100
#
# Min upvote score for comments
# min_score: 10
#
# 'comments' or 'story'
# mode: comments
#
# Screenshot overlay opacity (0.0-1.0)
# overlay_opacity: 1.0
#
# Target platform
# platform: youtube_shorts
#
# Quality preset (default 'local'). Other values: 'free', 'budget', 'standard', 'premium'. The 'local' preset uses in-process Diffusers / Kokoro / Whisper backends and falls through to remote per-stage when a backend isn't installed.
# quality: local
#
# Direct URL to a Reddit post
# reddit_url: ""
#
# Subreddit to search in
# subreddit: ""
#
# 'dark', 'light', or 'transparent'
# theme: dark
#
# Search query within subreddit
# topic: ""
#
# TTS model override
# tts_model: ""
#
# 'single', 'alternating', or 'random' (empty = preset default)
# voice_mode: ""
#
input: {}
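
As an illustration, a filled-in spec might look like the following (all values here are example choices, not recommendations or defaults beyond what the table above documents):

```yaml
workflow: video/reddit-stories
input:
  subreddit: AskReddit      # example subreddit
  topic: "work stories"     # search query within the subreddit
  mode: comments
  max_comments: 4
  theme: dark
```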

Run it locally:

fab-workflow --from-file my-run.yaml

Or submit over the wire — the same file is the request body:

curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=video/reddit-stories' \
-H 'Authorization: Bearer fab_xxx' \
-H 'content-type: application/yaml' \
--data-binary @my-run.yaml
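
The same request can be issued from Python using only the standard library. A sketch, mirroring the curl example above (fab_xxx is a placeholder token and the input values are illustrative):

```python
import urllib.request

# The run-spec is the request body, exactly as in my-run.yaml.
run_spec = """\
workflow: video/reddit-stories
input:
  subreddit: AskReddit
  max_comments: 4
"""

req = urllib.request.Request(
    "https://gofabric.dev/v1/workflows/runs?name=video/reddit-stories",
    data=run_spec.encode(),
    headers={
        "Authorization": "Bearer fab_xxx",
        "content-type": "application/yaml",
    },
    method="POST",
)

# Sending is commented out so the sketch stays side-effect free:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```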

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).
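
For example, a run-spec that fans out three variants with regeneration hints could add the following (variants and regenerate are the universal fields described above; the nested regenerate keys are the ones the parameter table lists, and their values here are illustrative):

```yaml
workflow: video/reddit-stories
variants: 3                 # run the workflow 3 times with different sampling
regenerate:
  direction: "punchier pacing in the first five seconds"
  extra_instructions: "keep the same background video"
input: {}
```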

  • Last user task collect_output has no Pydantic return type — workflow output schema is null. Declare a WorkflowOutput subclass and pass it to Flow(output=…) for a strict contract.
  • Task merge_reddit_assets has no Pydantic types — contract is opaque to consumers.
  • Task burn_subtitles has no Pydantic types — contract is opaque to consumers.
  • Task reddit-stories-output has no Pydantic types — contract is opaque to consumers.
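
A minimal sketch of the suggested fix for the first lint, assuming WorkflowOutput is a Pydantic BaseModel subclass as the lint implies (the field names and the Flow usage shown in the trailing comment are hypothetical, standing in for the real fabric API):

```python
from pydantic import BaseModel


class WorkflowOutput(BaseModel):
    """Hypothetical stand-in for the framework's WorkflowOutput base class;
    real code would import it from the fabric package instead."""


class RedditStoriesOutput(WorkflowOutput):
    video_path: str          # path to the rendered video artifact
    duration_seconds: float  # total runtime of the final cut


# Passing the model to the flow definition gives consumers a strict,
# non-null output schema (hypothetical call shape):
# flow = Flow(output=RedditStoriesOutput, ...)
```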