
Formula Cloning

Formula Cloning lets you reverse-engineer any video’s style — visuals, audio, script structure, and effects — into a reusable formula, then generate new videos on different topics that match the same style exactly.

Feed in a reference video (or two) and Fabric extracts everything: color palette, camera work, hook type, narration pacing, music mood, subtitle style, and color grading. The formula maps directly to the AI Shorts pipeline inputs, so any extracted formula can drive full video generation.

Reference Video(s)
        │
Download + Probe             (yt-dlp: YouTube, TikTok, Instagram; ffprobe: resolution, fps, duration)
        │
Transcribe + Keyframes       (Whisper large-v3, word-level; ffmpeg scene detection + sampling, capped at 30 frames)
        │
        ├─ analyze_visuals   (Gemini Vision)
        ├─ analyze_audio     (Gemini)
        ├─ analyze_script    (Gemini)
        └─ analyze_effects   (Gemini Vision)
        │
synthesize_formula           (consensus if 2+ videos)
        │
save_formula                 (~/.fabric/formulas/)
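
To make the sampling step concrete, a minimal keyframe extraction with ffmpeg scene detection might look like the Python sketch below; the 0.3 scene threshold and the file layout are illustrative assumptions, not Fabric's actual settings.

import subprocess
from pathlib import Path

def extract_keyframes(video: str, out_dir: str, max_frames: int = 30) -> list[Path]:
    """Sample frames at detected scene changes, capped at max_frames."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", video,
            "-vf", "select='gt(scene,0.3)'",   # keep frames that open a new scene (0.3 is an assumed threshold)
            "-vsync", "vfr",                   # emit one image per selected frame
            str(out / "frame_%03d.jpg"),
        ],
        check=True,
    )
    return sorted(out.glob("frame_*.jpg"))[:max_frames]   # hard cap mirrors the 30-frame limit above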

The four analysis branches run in parallel via Gemini, extracting:

Visuals

Color palette, color temperature, film stock, lighting style, camera focal length, movement, cut pacing, overall aesthetic

Audio

Voice gender/tone/pace, music genre/tempo/mood, volume level, sound effects style

Script

Hook type, emotional trigger, tone, segment structure, narration density, word choice patterns, CTA style, hook template

Effects

Subtitle style/colors/position, hook overlay, transitions, color grade, post-processing effects
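
As a rough illustration of one branch, the visuals analysis could be a single multimodal call along these lines; the model name, prompt wording, and output handling are assumptions, not Fabric's internal prompts.

import google.generativeai as genai
from PIL import Image

genai.configure(api_key="...")                      # assumes a Gemini API key is available
model = genai.GenerativeModel("gemini-1.5-flash")   # illustrative model choice

def analyze_visuals(keyframe_paths: list[str]) -> str:
    """Send sampled keyframes to Gemini Vision and ask for a structured visual-style analysis."""
    frames = [Image.open(p) for p in keyframe_paths]
    prompt = (
        "Describe this video's visual style as JSON with keys: color_palette, "
        "film_stock_match, lighting_style, camera_work, cut_rhythm, overall_aesthetic."
    )
    return model.generate_content([prompt, *frames]).text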

To extract a formula, run the formula-extract workflow:
# From a YouTube video
fab-workflow formula-extract \
--input video_paths='["https://www.youtube.com/watch?v=VIDEO_ID"]' \
--input formula_name="viral-style"
# From a local file
fab-workflow formula-extract \
--input video_paths='["/path/to/reference.mp4"]' \
--input formula_name="my-style"
# From multiple videos (extracts what's consistent)
fab-workflow formula-extract \
--input video_paths='["https://youtube.com/watch?v=abc", "https://youtube.com/watch?v=xyz"]' \
--input formula_name="brand-style"
To generate a new video in that style, run formula-shorts with a topic:
fab-workflow formula-shorts \
--input formula="viral-style" \
--input topic="Why cold showers boost productivity" \
-o ./output
# With quality override
fab-workflow formula-shorts \
--input formula="viral-style" \
--input topic="5 habits that changed my life" \
--quality premium \
-o ./output
Parameter       Type       Default                          Description
video_paths     string[]   required                         List of video file paths or URLs (YouTube, TikTok, Instagram)
formula_name    string     "untitled"                       Name for the saved formula
formula_path    string     ~/.fabric/formulas/{name}.json   Custom save path for the formula
whisper_model   string     "large-v3"                       Whisper model for transcription
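
The whisper_model parameter selects the checkpoint used for the word-level transcription step. With the open-source whisper package, that call looks roughly like this sketch (the result handling is illustrative, not Fabric's code):

import whisper

model = whisper.load_model("large-v3")                            # matches the default above
result = model.transcribe("reference.mp4", word_timestamps=True)

# Flatten per-segment word timings into (word, start_sec, end_sec) tuples.
words = [
    (w["word"], w["start"], w["end"])
    for segment in result["segments"]
    for w in segment["words"]
]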

All AI Shorts parameters are supported. The formula provides defaults that can be overridden:

Parameter       Type     Default        Description
formula         string   required       Formula name or path to formula JSON
topic           string   required       The subject of the new video
hook            string   ""             Override the formula’s hook pattern
mood            string   from formula   Override the formula’s tone
visual_style    string   from formula   Override the formula’s visual aesthetic
quality         string   ""             Quality preset (formula handles style, quality handles models)
duration_secs   int      from formula   Override the formula’s pacing

Formulas are saved as JSON at ~/.fabric/formulas/{name}.json. Every field is optional — missing fields fall back to AI Shorts defaults.

{
  "visuals": {
    "color_palette": {
      "dominant_colors": ["#ED4A2C", "#9000FF", "#1E1E1E"],
      "color_temperature": "warm",
      "saturation_level": "vibrant",
      "contrast": "high"
    },
    "film_stock_match": "ARRI ALEXA Mini LF",
    "lighting_style": {
      "source": "mixed",
      "behavior": "soft diffused",
      "color_tone": "contrasting warm red and cool purple/blue tones"
    },
    "camera_work": {
      "dominant_focal_length": "35mm",
      "movement_style": "static",
      "pacing": "moderate"
    },
    "cut_rhythm": {
      "avg_segment_duration_secs": 4.0,
      "pattern": "long takes intercut with graphic overlays"
    },
    "overall_aesthetic": "Modern high-contrast broadcast style with vibrant neon lighting"
  }
}
{
  "audio": {
    "voice": {
      "gender": "male",
      "tone": "energetic",
      "pace": "fast",
      "style_description": "Fast, energetic commentary with personal interjections"
    },
    "music": {
      "genre": "electronic",
      "tempo": "upbeat",
      "mood_prompt": "Upbeat suspenseful electronic instrumental with driving rhythm",
      "volume_level": "subtle_background"
    },
    "sound_design": {
      "has_sfx": false,
      "sfx_style": "minimal"
    }
  }
}
{
  "script": {
    "hook_type": "curiosity_gap",
    "emotional_trigger": "curiosity",
    "tone": "conversational",
    "structure": {
      "segment_types": ["hook", "problem", "insight", "solution", "cta"],
      "segment_durations": [3, 5, 8, 6, 4],
      "total_segments": 5,
      "alternation_pattern": "actor→broll→actor→broll→actor"
    },
    "narration_density": 1.5,
    "word_choice_patterns": "Direct, fast-paced, informal phrases",
    "cta_style": "Follow for more",
    "hook_template": "[Person] did [unexpected action], and we found the reason why."
  }
}

The hook_type uses the same taxonomy as Hook Generation: question, contrarian, cliffhanger, pain_point, shock, social_proof, story, fomo, challenge, counter_intuitive, curiosity_gap, authority, confession, list, comparison.

{
  "effects": {
    "subtitles": {
      "style": "karaoke_highlight",
      "position": "center",
      "font_style": "bold_sans",
      "colors": {
        "active": "#FFFF00",
        "inactive": "#FFFFFF",
        "outline": "#000000"
      },
      "size": "large"
    },
    "hook_overlay": {
      "has_text_overlay": true,
      "style": "bold_caps",
      "position": "center"
    },
    "transitions": "cut",
    "color_grade": {
      "ffmpeg_filter_hint": "warm_saturated",
      "lut_description": "Vibrant saturated colors with deep blacks and natural skin tones"
    },
    "post_effects": "none"
  }
}

When you pass 2+ reference videos, the extraction pipeline:

  1. Runs the full analysis on each video independently
  2. Sends all analyses to Gemini with a consensus prompt
  3. Keeps only what’s consistent across all videos — that’s the formula
  4. Excludes elements that vary between videos

This is useful for extracting a creator’s or brand’s signature style from multiple examples rather than copying a single video.
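
A consensus pass over the per-video analyses could be sketched as a single prompt like the one below; the wording, model choice, and the assumption that the model returns bare JSON are illustrative, not Fabric's actual implementation.

import json
import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-flash")   # illustrative model choice

def synthesize_formula(per_video_analyses: list[dict]) -> dict:
    """Keep only the style elements that are consistent across every analysis."""
    prompt = (
        "You are given one style analysis per reference video. Return a single "
        "formula JSON that keeps only the fields whose values agree across all "
        "analyses and omits anything that varies.\n\n"
        + "\n\n".join(json.dumps(a, indent=2) for a in per_video_analyses)
    )
    return json.loads(model.generate_content(prompt).text)   # assumes the model replies with bare JSON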

For example, with three local references:
fab-workflow formula-extract \
--input video_paths='["/path/to/video1.mp4", "/path/to/video2.mp4", "/path/to/video3.mp4"]' \
--input formula_name="creator-style"

The formula doesn’t replace the AI Shorts pipeline — it constrains it. Each formula field maps to an existing pipeline input:

Formula Field               Pipeline Input          Effect
script.hook_type            extra_rules             Forces a specific hook pattern
script.emotional_trigger    extra_rules             Sets the emotional through-line
script.tone                 mood                    Controls narration tone
script.structure            extra_blocks            Constrains segment sequence and durations
script.hook_template        extra_rules             Pattern for the opening line
visuals.overall_aesthetic   visual_style            Unified visual directive for all prompts
visuals.film_stock_match    Shot design override    Overrides continuity brief film stock
visuals.camera_work         extra_rules             Camera/lens constraints for b-roll prompts
audio.voice.gender          voice_gender_override   Forces voice gender selection
audio.music.mood_prompt     music_mood_override     Custom music generation prompt
audio.music.volume_level    bgm_volume              Background music level
effects.subtitles.colors    subtitle_style          Subtitle highlight/inactive/outline colors
effects.color_grade         color_grade_hint        FFmpeg color grading constraints

User-provided inputs always override formula values — the formula is a default, not a lock.
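
Conceptually, the precedence works like the hypothetical helper below (not Fabric's code): AI Shorts defaults are overridden by formula values, which are in turn overridden by explicit user inputs.

def resolve_inputs(shorts_defaults: dict, formula_values: dict, user_inputs: dict) -> dict:
    """Later layers win: AI Shorts defaults < formula values < explicit user inputs."""
    resolved = dict(shorts_defaults)
    resolved.update({k: v for k, v in formula_values.items() if v is not None})
    resolved.update({k: v for k, v in user_inputs.items() if v is not None})
    return resolved

# A formula that sets mood is still overridden by --input mood=... on the CLI:
resolve_inputs(
    {"mood": "neutral", "visual_style": "cinematic"},
    {"mood": "conversational"},
    {"mood": "dramatic"},
)   # -> {"mood": "dramatic", "visual_style": "cinematic"}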

You can create a formula by hand without a reference video. Create a JSON file at ~/.fabric/formulas/{name}.json with any subset of the schema fields above.
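
For example, a hand-written formula that only pins the hook type, tone, and subtitle colors (values taken from the schema examples above) can be written out with a few lines of Python:

import json
from pathlib import Path

formula = {
    "script": {"hook_type": "curiosity_gap", "tone": "conversational"},
    "effects": {
        "subtitles": {
            "colors": {"active": "#FFFF00", "inactive": "#FFFFFF", "outline": "#000000"}
        }
    },
}

path = Path.home() / ".fabric" / "formulas" / "my-custom-style.json"
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(formula, indent=2))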

Then list and use it like any extracted formula:
# List saved formulas
ls ~/.fabric/formulas/
# Use a manually created formula
fab-workflow formula-shorts \
--input formula="my-custom-style" \
--input topic="Your topic here"

The extraction pipeline accepts:

  • YouTube — https://youtube.com/watch?v=... or https://youtu.be/...
  • TikTok — https://tiktok.com/@user/video/... or https://vm.tiktok.com/...
  • Instagram — https://instagram.com/reel/...
  • Direct URLs — Any https:// link to an .mp4 file
  • Local files — Absolute paths to video files on disk

Downloads use yt-dlp and are cached in temp storage during extraction.
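
For reference, a minimal download step with the yt-dlp Python API could look like this sketch; the temp-directory layout and output template are assumptions:

import tempfile
from pathlib import Path

from yt_dlp import YoutubeDL

def download_reference(url: str) -> Path:
    """Fetch a reference video into a temp directory and return its local path."""
    tmp_dir = Path(tempfile.mkdtemp(prefix="fabric-ref-"))
    opts = {"outtmpl": str(tmp_dir / "%(id)s.%(ext)s")}   # one file per video id
    with YoutubeDL(opts) as ydl:
        info = ydl.extract_info(url, download=True)
        return Path(ydl.prepare_filename(info))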