# Benchmark Workflows
Benchmark workflows compare local and remote AI models side-by-side, producing quality reports with timing metrics, file sizes, and output files for manual inspection. Use them to evaluate provider quality before committing to a production pipeline.
## TTS Quality

Workflow: `benchmarks/tts-quality`

Generates the same narration with every available TTS model, producing side-by-side audio files and a quality summary.
```sh
# Auto-generate a script from a topic
fab-workflow benchmarks/tts-quality --input topic="The future of AI"

# Use a custom script
fab-workflow benchmarks/tts-quality --input script="Your custom narration text here..."

# Benchmark specific models only
fab-workflow benchmarks/tts-quality \
  --input topic="AI agents" \
  --input 'models=["elevenlabs-turbo", "kokoro-local"]'
```

```sh
curl -X POST "$FABRIC_URL/v1/workflows/run?name=benchmarks/tts-quality" \
  -H "Authorization: Bearer $FABRIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "topic": "The future of AI",
      "models": ["elevenlabs-turbo", "kokoro-local"],
      "voice_gender": "female"
    }
  }'
```

### Models Benchmarked
| Name | Model ID | Local |
|---|---|---|
| `elevenlabs-turbo` | `elevenlabs/eleven_turbo_v2_5` | No |
| `fal-kokoro` | `fal-ai/kokoro/american-english` | No |
| `kokoro-local` | `kokoro` | Yes |
| `piper` | `piper` | Yes |
| `chatterbox` | `fal-ai/chatterbox` | No |
### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `topic` | string | `"The future of AI..."` | Topic to generate narration about |
| `script` | string | `""` | Direct narration text (skips generation) |
| `models` | list[str] | `[]` | Subset of models to benchmark (empty = all) |
| `voice_gender` | string | `"female"` | Voice gender |
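The same run can be triggered programmatically against the HTTP endpoint shown above. The sketch below only builds the request (URL, headers, JSON body) so it runs offline; the actual `requests.post` call is left commented, and the URL and key values are placeholders you would supply from your own deployment:

```python
import json

FABRIC_URL = "https://fabric.example.com"  # assumption: your deployment URL
API_KEY = "your-api-key"                   # assumption: your API key

def build_run_request(workflow: str, **inputs) -> tuple[str, dict, dict]:
    """Build the URL, headers, and JSON body for a workflow run request."""
    url = f"{FABRIC_URL}/v1/workflows/run?name={workflow}"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    # Omit unset parameters so server-side defaults apply
    # (an empty models list already means "benchmark all").
    body = {"input": {k: v for k, v in inputs.items() if v not in (None, "", [])}}
    return url, headers, body

url, headers, body = build_run_request(
    "benchmarks/tts-quality",
    topic="The future of AI",
    models=["elevenlabs-turbo", "kokoro-local"],
    voice_gender="female",
)
print(json.dumps(body))

# To actually submit the run (requires the requests package and a live server):
# import requests
# resp = requests.post(url, headers=headers, json=body)
```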
### Output

Produces individual audio files per model plus a JSON report with generation time and realtime factor metrics.
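As a sketch of how that report might be consumed, the snippet below ranks models by realtime factor (audio duration divided by generation time, so higher means faster than realtime). The field names (`model`, `generation_time_s`, `audio_duration_s`) are assumptions about the report schema, not a documented contract:

```python
def rank_by_realtime_factor(report: list[dict]) -> list[tuple[str, float]]:
    """Sort benchmark entries fastest-first by realtime factor."""
    ranked = []
    for entry in report:
        # Realtime factor: seconds of audio produced per second of generation.
        rtf = entry["audio_duration_s"] / entry["generation_time_s"]
        ranked.append((entry["model"], round(rtf, 2)))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

sample = [
    {"model": "kokoro-local", "generation_time_s": 2.0, "audio_duration_s": 10.0},
    {"model": "elevenlabs-turbo", "generation_time_s": 1.0, "audio_duration_s": 10.0},
]
print(rank_by_realtime_factor(sample))
# [('elevenlabs-turbo', 10.0), ('kokoro-local', 5.0)]
```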
## Lipsync Quality

Workflow: `benchmarks/lipsync-quality`

Compares local vs remote lipsync models by applying the same video + audio pair across all providers.
```sh
fab-workflow benchmarks/lipsync-quality \
  --input video_path=./portrait.mp4 \
  --input audio_path=./narration.mp3
```

```sh
curl -X POST "$FABRIC_URL/v1/workflows/run?name=benchmarks/lipsync-quality" \
  -H "Authorization: Bearer $FABRIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "video_path": "https://example.com/portrait.mp4",
      "audio_path": "https://example.com/narration.mp3"
    }
  }'
```

### Models Benchmarked

- VEED (remote)
- FAL (remote)
- WAV2Lip (local)
### Output

Side-by-side video outputs plus a JSON report with timing and file size metrics for each provider.
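A quick way to eyeball the trade-off between a fast local model and a slower remote one is to render the report as a small table, sorted by elapsed time. This is a sketch assuming `provider`, `elapsed_s`, and `output_bytes` fields in the report, which may differ in practice:

```python
def summarize(report: list[dict]) -> list[str]:
    """Render one 'provider  time  size' line per entry, fastest first."""
    lines = []
    for entry in sorted(report, key=lambda e: e["elapsed_s"]):
        mb = entry["output_bytes"] / 1_000_000  # bytes -> megabytes
        lines.append(f"{entry['provider']:<10} {entry['elapsed_s']:>6.1f}s {mb:>7.1f} MB")
    return lines

sample = [
    {"provider": "VEED", "elapsed_s": 41.2, "output_bytes": 18_400_000},
    {"provider": "WAV2Lip", "elapsed_s": 9.8, "output_bytes": 22_100_000},
]
for line in summarize(sample):
    print(line)
```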
## Video Generation Quality

Workflow: `benchmarks/video-quality`

Compares video generation models by producing 4-second clips with identical prompts across all providers.
```sh
fab-workflow benchmarks/video-quality \
  --input prompt="A cat sitting on a windowsill watching rain"
```

```sh
curl -X POST "$FABRIC_URL/v1/workflows/run?name=benchmarks/video-quality" \
  -H "Authorization: Bearer $FABRIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": { "prompt": "A cat sitting on a windowsill watching rain" }
  }'
```

### Models Benchmarked

- FAL Veo (remote)
- Seedance (remote)
- Gemini Veo (remote)
- WAN (remote/local)
### Output

Individual video clips per model plus a JSON report with resolution, FPS, duration, and generation time for each.
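Since every provider is asked for the same 4-second clip, one useful check on the report is flagging models whose output duration drifts from the request. The field names (`model`, `duration_s`, `fps`, `width`, `height`) and tolerance below are assumptions for illustration:

```python
def flag_out_of_spec(report: list[dict], expected_s: float = 4.0,
                     tolerance_s: float = 0.25) -> list[str]:
    """Return model names whose clip duration deviates from the requested length."""
    return [
        entry["model"]
        for entry in report
        if abs(entry["duration_s"] - expected_s) > tolerance_s
    ]

sample = [
    {"model": "fal-veo", "duration_s": 4.0, "fps": 24, "width": 1280, "height": 720},
    {"model": "wan", "duration_s": 5.0, "fps": 16, "width": 832, "height": 480},
]
print(flag_out_of_spec(sample))
# ['wan']
```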