
Content Generation

Content workflows generate platform-native text — captions, posts, ad copy, and multi-part series — with tone control and media awareness for each social platform.

Workflow: global/platform-adapt

Takes a single hook or concept and produces platform-native text for each target platform. Each output respects the platform’s character limits, media context, formatting conventions, and cultural voice.

```sh
# Caption mode — text to accompany an image or video
fab-workflow global/platform-adapt \
  --input hook_text="3 things nobody tells you about cold showers" \
  --input platforms=instagram,linkedin,twitter,tiktok \
  --input mode=caption \
  --input visual_description="person stepping into ice bath at sunrise"

# Standalone mode — self-contained text posts
fab-workflow global/platform-adapt \
  --input hook_text="3 things nobody tells you about cold showers" \
  --input platforms=twitter,linkedin,threads \
  --input mode=standalone \
  --input tone_mode=conversational
```
| Mode | Use Case | Text Relationship to Visual |
|---|---|---|
| `caption` | Instagram post, TikTok video, YouTube description | References and complements the visual |
| `standalone` | Tweet thread, LinkedIn article, Threads post | Text carries the full message |

All text-generating workflows support a tone_mode parameter:

| Tone | Description | Best For |
|---|---|---|
| `aggressive` | Scroll-stopping, bold, provocative. Every word earns its place. | Short-form video hooks, attention-grabbing posts |
| `narrative` | Storytelling warmth, vulnerability, emotional arc. The reader should feel something. | Instagram captions, LinkedIn posts, series content |
| `conversational` | Casual first-person, like texting a smart friend. Relatable over impressive. | Threads, Twitter, community engagement |

Each platform has built-in constraints for character limits, hashtag strategy, dominant media type, and voice:

| Platform | Primary Media | Max Chars | Target Length | Hashtags |
|---|---|---|---|---|
| twitter | Text-first, often with image | 280 | Fit in limit | 1-3 |
| linkedin | Text-first, optional image/doc | 3,000 | ~200 words | 3-5 |
| instagram | Image or video, always | 2,200 | ~150 words | 20-30 |
| threads | Text-first, optional image | 500 | Fit in limit | 0-2 |
| facebook | Mixed (video, image, link, text) | 63,206 | ~200 words | 1-3 |
| reddit | Text or link post | 40,000 | ~250 words | None |
| tiktok | Video, always | 2,200 | ~50 words | 3-5 |
| youtube | Video, always | 5,000 | ~200 words | 3-5 |
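The constraints above are a natural fit for a lookup table. The sketch below is illustrative only (assumed names, not part of fab-workflow) and shows how generated text could be checked against a platform's character limit and hashtag range:

```python
# Illustrative sketch: the per-platform constraints above expressed as data,
# plus a simple validity check. PLATFORM_LIMITS and fits_platform are
# assumptions for this example, not fab-workflow's actual API.

PLATFORM_LIMITS = {
    "twitter":   {"max_chars": 280,   "hashtags": (1, 3)},
    "linkedin":  {"max_chars": 3000,  "hashtags": (3, 5)},
    "instagram": {"max_chars": 2200,  "hashtags": (20, 30)},
    "threads":   {"max_chars": 500,   "hashtags": (0, 2)},
    "facebook":  {"max_chars": 63206, "hashtags": (1, 3)},
    "reddit":    {"max_chars": 40000, "hashtags": (0, 0)},
    "tiktok":    {"max_chars": 2200,  "hashtags": (3, 5)},
    "youtube":   {"max_chars": 5000,  "hashtags": (3, 5)},
}

def fits_platform(platform: str, text: str, hashtags: list) -> bool:
    """Check generated text against a platform's built-in constraints."""
    limits = PLATFORM_LIMITS[platform]
    lo, hi = limits["hashtags"]
    return len(text) <= limits["max_chars"] and lo <= len(hashtags) <= hi
```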
| Parameter | Type | Default | Description |
|---|---|---|---|
| `hook_text` | string | required | Core hook or concept to adapt |
| `topic` | string | `""` | Broader topic context |
| `platforms` | string or list | `["instagram", "linkedin", "twitter"]` | Target platforms (comma-separated or list) |
| `mode` | string | `"caption"` | `"caption"` or `"standalone"` |
| `tone_mode` | string | `"aggressive"` | `"aggressive"`, `"narrative"`, or `"conversational"` |
| `visual_description` | string | `""` | What the image/video shows (caption mode) |
| `brand_voice` | string | `""` | Brand voice guidelines |
| `target_audience` | string | `""` | Audience description |
| `niche` | string | `""` | Content niche |
```json
{
  "platform_texts": [
    {
      "platform": "twitter",
      "text": "Nobody tells you this about cold showers: the benefits kick in at week 3, not day 1. Most people quit at day 5.",
      "hashtags": ["coldshowers", "biohacking"],
      "char_count": 112,
      "tone_mode": "conversational",
      "is_thread": false,
      "thread_tweets": null
    },
    {
      "platform": "linkedin",
      "text": "I've taken cold showers every morning for 6 months.\n\nHere's what nobody tells you:\n\nThe first week is pure willpower...",
      "hashtags": ["wellness", "habits", "coldexposure"],
      "char_count": 847,
      "tone_mode": "conversational",
      "is_thread": false,
      "thread_tweets": null
    }
  ]
}
```

For Twitter, if the concept needs more than 280 characters, the system produces a thread:

```json
{
  "platform": "twitter",
  "text": "Nobody tells you this about cold showers:",
  "is_thread": true,
  "thread_tweets": [
    "Nobody tells you this about cold showers:",
    "The benefits don't kick in at day 1. They kick in at week 3...",
    "Here's what actually happens to your body..."
  ]
}
```
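A minimal sketch of that splitting behaviour, assuming a greedy word-packing strategy (the docs do not specify the actual algorithm, and `split_into_thread` is a hypothetical name):

```python
# Hedged sketch: pack words greedily into tweets that each fit the 280-char
# limit. Assumes no single word exceeds the limit; the real workflow splits
# via an LLM, so this only illustrates the constraint being enforced.

TWEET_LIMIT = 280

def split_into_thread(text: str, limit: int = TWEET_LIMIT) -> list:
    tweets, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            tweets.append(current)  # current tweet is full; start a new one
            current = word
    if current:
        tweets.append(current)
    return tweets
```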

Workflow: content/digital-product

End-to-end workflow that generates a complete, sellable digital product from a topic or niche. It discovers real problems from Reddit, identifies the highest-value audience, generates hooks, and designs a solution, then produces a professionally designed PDF guide (V1 basic + V2 premium), a conversion-optimized landing page, Gumroad copy, social media visuals, an email launch sequence, and a full distribution pack.

```sh
# Minimal — topic only
fab-workflow content/digital-product \
  --input topic="AI security for founders"

# Full control
fab-workflow content/digital-product \
  --input topic="meal prep for busy parents" \
  --input product_type=checklist \
  --input price_range='$9-$14' \
  --input platforms=instagram,tiktok,twitter \
  --input tone_mode=conversational
```
```
research_reddit → select_audience → lock_brand_voice → refine_problems →
generate_hooks → design_solution →
fork(
  branch: structure_pdf → generate_v1 → quality_gate → generate_v2,
  branch: generate_pdf_design
) → render_pdf →
fork(
  branch: landing_page,
  branch: gumroad_page,
  branch: social_visuals,
  branch: distribution_pack,
  branch: email_sequence
) → package_output
```
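The fork stages above run their branches concurrently and join before the next step. A minimal sketch of that pattern with asyncio, using stub coroutines named after the second fork's branches (the bodies are placeholders, not the real implementation):

```python
# Fork/join sketch: all branches run concurrently via asyncio.gather, and
# the stage completes only when every branch has produced its artifact
# (the join before package_output). Branch bodies are stubs.

import asyncio

async def landing_page(): return "landing-page.html"
async def gumroad_page(): return "gumroad-copy.md"
async def social_visuals(): return "social-card-*.png"
async def distribution_pack(): return "distribution-pack.json"
async def email_sequence(): return "email-sequence.json"

async def fork_stage():
    return await asyncio.gather(
        landing_page(), gumroad_page(), social_visuals(),
        distribution_pack(), email_sequence(),
    )

artifacts = asyncio.run(fork_stage())
```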
| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | Topic or niche (e.g. "AI security for founders") |
| `niche_override` | string | `""` | Override auto-detected niche |
| `price_range` | string | `"$19-$29"` | Target price for positioning copy |
| `product_type` | string | `"guide"` | guide, checklist, template, or toolkit |
| `platforms` | list[str] | `["twitter", "linkedin", "tiktok", "instagram"]` | Social platforms for distribution |
| `tone_mode` | string | `"conversational"` | aggressive, narrative, or conversational |
| `min_quality_score` | float | `6.0` | Minimum PDF quality score (1-10); triggers retry if below |

The workflow produces 12+ artifacts:

| Artifact | Format | Description |
|---|---|---|
| `{title}-v1.pdf` | PDF | Core guide — professionally styled with cover image |
| `{title}-v2-premium.pdf` | PDF | Premium version with case studies, worksheets, calendar |
| `{title}-v1.md` / `v2.md` | Markdown | Raw content for further editing |
| `design-system.json` | JSON | Colors, fonts, layout tokens |
| `landing-page.html` | HTML | Self-contained, mobile-responsive, SEO-ready landing page |
| `gumroad-copy.md` | Markdown | Gumroad product page copy with FAQ |
| `distribution-pack.json` | JSON | 20 TikTok hooks, 5 scripts, Twitter thread, Reddit post, captions |
| `email-sequence.json` | JSON | 5-email launch sequence (teaser → launch → proof → value → urgency) |
| `cover.png` | Image | AI-generated product cover |
| `social-card-*.png` | Images | Platform-sized social cards with hook overlays |

The V1 content passes through an LLM quality evaluation scoring completeness, actionability, clarity, emotional resonance, and coherence. If the score falls below min_quality_score, the content generation task retries automatically — implementing a “generate → evaluate → regenerate” loop.
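The retry loop can be pictured as follows. This is a sketch of the control flow only, with hypothetical `generate()` and `evaluate()` stand-ins for the LLM calls:

```python
# Sketch of the "generate → evaluate → regenerate" loop: regenerate the
# content until the evaluator's score clears min_quality_score or the
# attempt budget runs out. generate/evaluate are injected callables here.

def quality_gate(generate, evaluate, min_quality_score=6.0, max_attempts=3):
    content, score = None, 0.0
    for _ in range(max_attempts):
        content = generate()
        score = evaluate(content)
        if score >= min_quality_score:
            break  # quality gate passed
    return content, score
```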

An early lock_brand_voice task defines tone, vocabulary register, sentence style, power words, and words to avoid — based on the detected audience. This voice guide is injected into every downstream content generation prompt, ensuring tonal consistency across the PDF, landing page, emails, and social posts.


Workflow: global/content-series

Generates a connected multi-part content series with narrative threading. Each part stands alone but contains forward references (cliffhangers) and backward callbacks, creating anticipation that drives follow/subscribe behavior.

```sh
fab-workflow global/content-series \
  --input topic="why most startups fail at hiring" \
  --input num_parts=3 \
  --input platform=linkedin \
  --input tone_mode=narrative
```

The system enforces a narrative arc across all parts:

```
Part 1: SETUP
  Hook hard. Establish the problem.
  End with cliffhanger → "Tomorrow I'll show you what happened when..."

Part 2: COMPLICATION
  Callback → "Yesterday I told you about X. Here's what I didn't mention..."
  Deepen the tension. Introduce twists.
  End with cliffhanger → next part

Part 3: RESOLUTION
  Callback → previous parts
  Deliver the payoff. Insight must feel EARNED.
```

Each part includes a visual_suggestion for what image or video should accompany it, maintaining visual continuity across the series.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | Overarching topic or theme |
| `num_parts` | int | `3` | Number of parts in the series |
| `platform` | string | `"instagram"` | Primary platform |
| `tone_mode` | string | `"narrative"` | Tone mode |
| `hook_ideas` | list[dict] | `[]` | Pre-generated hooks to weave in |
| `niche` | string | `""` | Content niche |
| `target_audience` | string | `""` | Audience description |
```json
{
  "series": [
    {
      "part_number": 1,
      "hook": "We hired 12 people last year. 8 were wrong.",
      "body": "Full post text for Part 1...",
      "overlay_text": "8 WRONG HIRES",
      "visual_suggestion": "Split-screen: empty office chairs vs. packed meeting room",
      "cliffhanger_for_next": "Tomorrow I'll tell you exactly which interview question predicted every bad hire.",
      "callback_to_previous": null,
      "standalone_summary": "Most hiring processes optimize for skills over culture fit, leading to high turnover."
    },
    {
      "part_number": 2,
      "hook": "Yesterday I told you we made 8 bad hires. Here's the question we were afraid to ask.",
      "body": "Full post text for Part 2...",
      "overlay_text": "THE QUESTION",
      "visual_suggestion": "Close-up of interview setting, two chairs facing each other",
      "cliffhanger_for_next": "But knowing the right question wasn't enough. Part 3: the system we built after nearly going under.",
      "callback_to_previous": "Yesterday I told you we made 8 bad hires last year.",
      "standalone_summary": "One interview question predicted culture fit with 85% accuracy."
    }
  ]
}
```

Workflow: global/ad-sequence

Generates a multi-phase retargeting ad sequence with escalating psychological momentum. Unlike generating N independent ads, this produces a temporally sequenced campaign where each phase builds on the previous one.

```sh
fab-workflow global/ad-sequence \
  --input product="AI writing assistant for solopreneurs" \
  --input audience="solopreneurs who write daily but struggle with consistency" \
  --input platforms=instagram,facebook \
  --input sequence_days=7
```
| Phase | Days | Psychology | Emotional Lever | What NOT to Do |
|---|---|---|---|---|
| Awareness | 1-2 | "This brand gets me" | Empathy, recognition | No selling, no product, no discounts |
| Consideration | 3-5 | "Maybe this could work" | Trust, social proof | No urgency, no pressure |
| Conversion | 6-7 | "I need to act NOW" | Urgency, scarcity | No soft language, be direct |

Each phase produces 2 ad variants per platform (A/B testing).

| Parameter | Type | Default | Description |
|---|---|---|---|
| `product` | string | required | Product/service description |
| `audience` | string | `""` | Target audience |
| `pain_points` | list[str] | `[]` | Audience pain points |
| `platforms` | string or list | `["instagram", "facebook"]` | Target platforms |
| `sequence_days` | int | `7` | Campaign duration in days |
| `tone_mode` | string | `"aggressive"` | Tone mode |
| `unique_value` | string | `""` | Product differentiator |
| `social_proof` | string | `""` | Testimonials, stats to reference |
```json
{
  "ad_sequence": [
    {
      "phase": 1,
      "phase_name": "Awareness",
      "day_range": "Day 1-2",
      "strategy": "Introduce the writing consistency problem without mentioning the product",
      "ads": [
        {
          "platform": "instagram",
          "variant": "A",
          "headline": "You write every day. But do you publish?",
          "body": "Most solopreneurs have 47 drafts and zero published pieces...",
          "cta": "Learn Why",
          "emotional_lever": "recognition",
          "visual_suggestion": "Split screen: overflowing notes app vs. empty blog"
        }
      ]
    },
    {
      "phase": 2,
      "phase_name": "Consideration",
      "day_range": "Day 3-5",
      "strategy": "Show how the product solved the exact problem from Phase 1",
      "ads": [...]
    },
    {
      "phase": 3,
      "phase_name": "Conversion",
      "day_range": "Day 6-7",
      "strategy": "Time-bound offer with direct CTA and friction removal",
      "ads": [...]
    }
  ]
}
```

Workflow: video/research-to-shorts

Generates a viral short video where every statistic and claim is grounded in real web research. Unlike ai-shorts where the LLM invents freely, this workflow scrapes web sources first and constrains the script to verified facts.

```sh
# Minimal — topic only
fab-workflow video/research-to-shorts --input topic="why developers are switching to Rust"

# Full control
fab-workflow video/research-to-shorts \
  --input topic="why developers are switching to Rust" \
  --input mood="informed, slightly opinionated, conversational" \
  --input platform="YouTube" \
  --input max_sources=8 \
  --input duration_secs=60 \
  --input bgm_volume=0.10 \
  --input caption_color="#00FF88"
```
```
web.research (DuckDuckGo scrape, 300-char truncation)
ai.generate (fact-grounded script)
├── ai.generate (portrait)
├── ai.generate (voiceover)
├── ai.generate (b-roll × 3)
└── music.generate (BGM)
talking head → lipsync
transcribe → subtitles (ASS + SRT)
audio.mix (ducking)
ffmpeg.composite → video.effects → hook-overlay
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | Topic to research and create a video about (also accepts `query`) |
| `mood` | string | `"high-energy and conversational"` | Tone and delivery style |
| `platform` | string | `"TikTok"` | Target platform |
| `duration_secs` | int | `45` | Target video length |
| `presenter_look` | string | `"confident young creator..."` | AI actor appearance description |
| `visual_style` | string | `""` | Override visual aesthetic (e.g. "neon cyberpunk") |
| `quality` | string | `""` | Quality preset: cheap, premium, ultra, local |
| `bgm_volume` | float | `0.15` | Background music volume (0.0–1.0) |
| `num_hooks` | int | `15` | Number of hooks to generate before selecting the best |

The web.research node scrapes web sources and truncates each snippet to 300 characters. This serves two purposes:

  1. Factual grounding — the script generation prompt includes "Every statistic MUST come from the verified facts above", constraining the LLM to real data
  2. Prompt injection defense — truncation limits the attack surface from scraped web content

The script output includes a fact_sources field listing which URLs were used, enabling source attribution.
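A sketch of that preparation step, assuming each scraped source is a dict with `snippet` and `url` keys (`prepare_facts` and its shape are illustrative, not fab-workflow's actual API):

```python
# Hedged sketch: truncate each scraped snippet to 300 characters before it
# enters the script prompt (bounding both prompt size and the injection
# surface), and collect the URLs that become fact_sources.

SNIPPET_LIMIT = 300

def prepare_facts(sources):
    """Build the verified-facts prompt block plus the fact_sources list."""
    lines, urls = [], []
    for src in sources:
        snippet = src["snippet"][:SNIPPET_LIMIT]  # hard truncation
        lines.append(f"- {snippet} (source: {src['url']})")
        urls.append(src["url"])
    return "\n".join(lines), urls
```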

The pipeline produces a single .mp4 file at 1080x1920 (9:16 vertical) with burned-in subtitles, hook text overlay, and mixed audio. An SRT sidecar file is also generated for platform-native captions.

| | ai-shorts | research-to-shorts |
|---|---|---|
| Script grounding | LLM invents freely | Web research with attribution |
| Hallucination risk | High | Low |
| Caption formats | ASS + SRT | ASS + SRT |
| Caption styling | Parameterized | Parameterized |
| Color grading | Yes | Yes |

Workflow: global/content-modify

Takes existing content (text, image, audio, or video) and a natural-language prompt, then rewrites or redesigns the content accordingly. Supports preservation constraints to keep specific aspects unchanged.

```sh
# Rewrite text content
fab-workflow global/content-modify \
  --input content="Your blog post text here..." \
  --input prompt="Make it more casual and add humor" \
  --input preserve='["key_points", "length"]'

# Modify an image
fab-workflow global/content-modify \
  --input content=photo.jpg \
  --input prompt="Add dramatic lighting and warmer tones" \
  --input content_type=image

# Adjust audio
fab-workflow global/content-modify \
  --input content=voiceover.mp3 \
  --input prompt="Speed it up 1.5x"

# Edit video
fab-workflow global/content-modify \
  --input content=video.mp4 \
  --input prompt="Cut the first 10 seconds"
```
```
analyze_content → apply_modifications → evaluate_and_refine
```

Content type is auto-detected from the input (inline text vs. file extension), or can be overridden with content_type.

| Type | Input Format | Modification Capabilities |
|---|---|---|
| text | Inline text string | Full LLM rewrite with tone, style, structure changes |
| image | File path or URL (.jpg, .png, etc.) | Vision analysis → regeneration with modifications |
| audio | File path or URL (.mp3, .wav, etc.) | Speed, trim, re-voice with modified script |
| video | File path or URL (.mp4, .mov, etc.) | Trim, speed, filters via ffmpeg |
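One plausible way to implement the extension-based auto-detection described above; the function name and mapping are assumptions for illustration, not fab-workflow's code:

```python
# Illustrative sketch: map file extensions to content types, with
# content_type acting as an explicit override. Inline text has no
# recognised media extension, so it falls through to "text".

from pathlib import Path

EXTENSION_TYPES = {
    ".jpg": "image", ".jpeg": "image", ".png": "image",
    ".mp3": "audio", ".wav": "audio",
    ".mp4": "video", ".mov": "video",
}

def detect_content_type(content: str, override: str = "") -> str:
    if override:
        return override  # explicit content_type wins
    suffix = Path(content).suffix.lower()
    return EXTENSION_TYPES.get(suffix, "text")
```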

The preserve parameter accepts a list of aspects to keep unchanged:

| Constraint | Effect |
|---|---|
| `tone` | Keep the same writing tone/voice |
| `length` | Maintain approximate word count |
| `structure` | Keep the same organizational structure |
| `style` | Preserve formatting and stylistic choices |
| `key_points` | Keep the same core arguments/ideas |

For text content, the workflow evaluates the modification against the original prompt and scores fidelity, quality, and preservation compliance. If the score falls below 7/10, it automatically refines (up to 2 additional passes).

| Parameter | Type | Default | Description |
|---|---|---|---|
| `content` | string | required | The content to modify (inline text or file path/URL) |
| `prompt` | string | required | Modification instructions |
| `content_type` | string | auto-detected | `"text"`, `"image"`, `"audio"`, or `"video"` |
| `preserve` | list[str] | `[]` | Aspects to keep unchanged |
```json
{
  "content_type": "text",
  "original_summary": "Blog post about remote work productivity...",
  "modified_content": "The full rewritten text...",
  "modifications_applied": [
    "Shifted tone from formal to conversational",
    "Added humor through self-deprecating observations",
    "Replaced jargon with plain language"
  ],
  "fidelity_score": 8.5
}
```

Workflow: content/generate

Analyzes a topic, generates structured content (blog posts, articles, emails), and evaluates quality.

```sh
fab-workflow content/generate \
  --input topic="The future of remote work" \
  --input content_type="blog post" \
  --input audience="tech professionals"
```

```
analyze_topic → generate_content → evaluate_quality
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | Content topic |
| `audience` | string | `""` | Target audience |
| `content_type` | string | `"blog post"` | Format: blog post, email, social, video script |
```json
{
  "generated_content": {
    "title": "...",
    "content": "Full article text...",
    "meta_description": "...",
    "tags": ["remote-work", "productivity"]
  },
  "evaluation": {
    "overall_score": 8.5,
    "scores": {"clarity": 9, "engagement": 8, "seo": 7, "originality": 9, "actionability": 8},
    "improvements": ["Add more specific data points"],
    "ready_to_publish": true
  }
}
```

Workflow: content/long-form-script

Elite long-form scriptwriting with 6 narrative archetypes. Generates multi-act scripts with retention hooks, pacing guidance, and tone direction.

```sh
fab-workflow content/long-form-script \
  --input topic="The hidden cost of technical debt" \
  --input archetype="documentary" \
  --input duration_minutes=10

fab-workflow content/long-form-script \
  --input topic="How one developer built a $10M SaaS" \
  --input archetype="hero_journey"
```
| Archetype | Structure | Best For |
|---|---|---|
| `documentary` | Setup → evidence → synthesis → implications | Analytical, research-heavy topics |
| `hero_journey` | Origin → challenge → transformation → lesson | Personal or founder stories |
| `true_crime` | Mystery → investigation → revelation → aftermath | Investigative, expose-style content |
| `listicle` | Hook → items with escalation → callback | Ranked or numbered content |
| `problem_solution` | Problem → failed attempts → solution → proof | Tutorial and how-to content |
| `explainer` | Question → context → mechanism → so-what | Educational deep-dives |
```
research_topic → select_structure → generate_acts → score_retention
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | required | Script topic |
| `archetype` | string | auto-selected | Narrative archetype |
| `duration_minutes` | int | `8` | Target video length |
| `mood` | string | `""` | Tone direction |

Workflow: global/build-in-public

Transforms GitHub repo activity into authentic “build in public” social media posts. Fetches README, releases, and recent commits via GitHub API, extracts compelling story angles using 8 archetypes, and generates platform-native posts with a quality gate.

```sh
fab-workflow global/build-in-public \
  --input repo_url="https://github.com/astral-sh/ruff"

fab-workflow global/build-in-public \
  --input repo_url="https://github.com/astral-sh/ruff" \
  --input author_name="Charlie" \
  --input author_context="building a Python linter in Rust" \
  --input platforms="twitter,linkedin,threads" \
  --input time_range_days=14 \
  --input num_posts=5
```
| Archetype | Signal Patterns | Voice Example |
|---|---|---|
| `progress_update` | Multiple `feat:` commits, new release | "Shipped X this week. Here's why it matters…" |
| `til_moment` | Refactors, dependency changes, perf improvements | "TIL: I always assumed X but turns out Y…" |
| `milestone` | Version bump, v1.0/v2.0 release | "Just hit [milestone]. Started this [timeframe] ago…" |
| `behind_the_scenes` | Architecture changes, new modules | "Everyone asks how we handle X. Here's the actual architecture…" |
| `struggle_win` | Fix sequences, reverts then re-implements | "Spent 3 days on X. The fix? One line…" |
| `before_after` | Large refactors, delete-heavy commits | "Before: [old way]. After: [new way]…" |
| `hot_take` | Unconventional tech choices | "Unpopular opinion: [decision]. Here's why…" |
| `lessons_learned` | Reverts, major refactors, deprecations | "If I could start over, I'd change [thing]…" |
```
validate_input → fork(fetch_readme, fetch_releases, fetch_commits) →
merge_github_data → extract_story_angles → generate_posts →
quality_gate → format_output
```
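To make the signal patterns in the archetype table concrete, here is a hedged keyword sketch of matching commit messages to archetypes. The real extraction step is LLM-driven; this version only illustrates the mapping, and covers three archetypes:

```python
# Illustrative sketch: detect story archetypes from commit-message signals
# (e.g. "feat:" commits suggest a progress_update). Keyword matching is a
# stand-in for the workflow's LLM-based angle extraction.

def match_archetypes(commit_messages):
    signals = {
        "progress_update": ("feat:",),
        "struggle_win": ("fix:", "revert"),
        "til_moment": ("refactor", "perf"),
    }
    found = set()
    for msg in commit_messages:
        lower = msg.lower()
        for archetype, keywords in signals.items():
            if any(k in lower for k in keywords):
                found.add(archetype)
    return found
```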
| Parameter | Type | Default | Description |
|---|---|---|---|
| `repo_url` | string | required | GitHub repository URL |
| `platforms` | string | `"twitter,linkedin,threads,tiktok,youtube"` | Target platforms (comma-separated) |
| `voice` | string | `"conversational"` | Writing voice |
| `github_token` | string | env | GitHub API token (higher rate limits) |
| `time_range_days` | int | `30` | How far back to look at commits |
| `num_posts` | int | `5` | Number of posts to generate |
| `author_name` | string | `""` | Author name for personal touch |
| `author_context` | string | `""` | What the author is building |

Workflow: global/url-to-assets

Scrapes any URL (article, blog, GitHub repo, YouTube video, Reddit post) and generates multiple content asset types: social posts, video scripts, hook variants, and captions.

```sh
fab-workflow global/url-to-assets \
  --input url="https://example.com/article" \
  --input platforms=instagram,twitter

fab-workflow global/url-to-assets \
  --input url="https://youtube.com/watch?v=..." \
  --input asset_types=script,hooks,posts
```

```
scrape_url → extract_content_angles → generate_assets
```

Auto-detects content type from URL patterns:

| URL Pattern | Detected Type |
|---|---|
| `youtube.com/watch`, `youtu.be/` | YouTube (extracts transcript + metadata) |
| `github.com/` | GitHub repo |
| `twitter.com/`, `x.com/` | Twitter/X |
| `reddit.com/` | Reddit post |
| Everything else | Article (HTML readability extraction) |
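A minimal sketch of that pattern matching; `detect_source_type` is a hypothetical name, not fab-workflow's actual function:

```python
# Illustrative URL-type detection mirroring the table above: match the
# hostname (and path for YouTube watch URLs), falling back to "article".

from urllib.parse import urlparse

def detect_source_type(url: str) -> str:
    parts = urlparse(url)
    host = parts.netloc.lower().removeprefix("www.")
    if host == "youtu.be" or (host.endswith("youtube.com") and parts.path.startswith("/watch")):
        return "youtube"
    if host.endswith("github.com"):
        return "github"
    if host in ("twitter.com", "x.com"):
        return "twitter"
    if host.endswith("reddit.com"):
        return "reddit"
    return "article"
```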
| Parameter | Type | Default | Description |
|---|---|---|---|
| `url` | string | required | URL to scrape |
| `platforms` | string or list | `["instagram", "twitter"]` | Target platforms for generated assets |
| `asset_types` | string or list | `["posts", "hooks", "script"]` | Types of assets to generate |

Workflow: global/formula-extract

Analyzes reference videos to extract reusable style DNA as a VideoFormula. Extracts keyframes, probes media metadata, transcribes audio, then runs 4 parallel analysis branches (visuals, audio, script, effects) using Gemini vision. When given multiple videos, synthesizes a consensus formula.

```sh
# Single video
fab-workflow global/formula-extract \
  --input url="https://youtube.com/watch?v=..." \
  -o output/formula.json

# Multiple videos (consensus extraction)
fab-workflow global/formula-extract \
  --input 'urls=["https://youtube.com/watch?v=...", "https://tiktok.com/@user/video/..."]' \
  -o output/formula.json
```
```
download_media → probe_media → transcribe →
fork(analyze_visuals, analyze_audio, analyze_script, analyze_effects) →
merge_analysis → synthesize_formula
```

For multiple videos, the single_video_analysis sub-pipeline runs per video, then results are merged into a consensus formula.
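One plausible consensus strategy for categorical style fields is a majority vote across the per-video analyses. The sketch below is illustrative only (the actual synthesize step is LLM-driven, and the field names are assumed):

```python
# Hedged sketch: merge per-video analyses into a consensus formula by
# taking the most common value for each categorical field. Assumes every
# analysis dict has the same string-valued keys.

from collections import Counter

def consensus(analyses):
    return {
        key: Counter(a[key] for a in analyses).most_common(1)[0][0]
        for key in analyses[0]
    }
```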

The formula JSON captures every dimension of video style:

| Section | Contents |
|---|---|
| `script` | Hook type, tone, emotional trigger, CTA style, word choice patterns, segment structure |
| `visuals` | Overall aesthetic, camera work, lighting, color palette, film stock match |
| `audio` | Voice (gender, style), music (mood, volume level) |
| `effects` | Subtitle colors/position/size, color grade (FFmpeg hints), hook overlay style |

Use the output with Formula Shorts to generate new videos in the extracted style.