# benchmarks/tts-quality

TTS quality benchmark — compare local vs remote voiceover models.

- Category: benchmarks
- Source: `workflows/benchmarks/tts_quality.py`
## Input Schema

| Field | Type | Default | Description |
|---|---|---|---|
| models | string[] | [] | Subset of models to benchmark. Empty = all available. |
| narration | string | "" | Generated narration text (populated by the generate_script task). |
| regenerate | object | — | When set, this run is a regeneration. Workflows may read direction / keep / extra_instructions to modulate prompts; the engine persists parent_run_id and parent_variant_index as run-lineage columns. |
| script | string | "" | Direct narration text. If empty, a script is generated from the topic. |
| topic | string | "The future of artificial intelligence and its impact on creative industries" | Topic to generate a narration script about (ignored if script is provided). |
| variants | integer | 1 | Number of independent variant executions (1–10). When > 1, the engine runs the workflow N times with different sampling, producing N outputs. |
| voice_gender | string | "female" | Voice gender: male or female. |
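As a concrete illustration, a filled `input:` block for this schema might look like the following. This is a sketch: the field names come from the schema above, but every value — including the model identifiers — is a placeholder, not a real model name.

```yaml
# Hypothetical input block for benchmarks/tts-quality.
# Field names match the input schema; values are placeholders.
input:
  models: ["local-model-a", "remote-model-b"]   # empty list would benchmark all available models
  topic: "The future of artificial intelligence and its impact on creative industries"
  voice_gender: "male"
```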
## Output Schema

| Field | Type | Default | Description |
|---|---|---|---|
| kind | object | — | Variant card shape: video / carousel / image / text. Surfaced on the per-variant entry of the run-output API and used by gallery UIs to pick the right layout. |
| output_dir | string | "" | Directory containing audio files. |
| results | object[] | — | Per-model benchmark results. |
| summary | object | — | Aggregated comparison summary. |
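To make the shape concrete, a run's output could look roughly like the JSON below. Only the top-level fields (`output_dir`, `results`, `summary`) come from the schema; the keys inside each `results` entry and inside `summary` are illustrative guesses, since the schema does not specify them.

```json
{
  "output_dir": "/path/to/run/audio",
  "results": [
    { "model": "local-model-a", "status": "ok" },
    { "model": "remote-model-b", "status": "ok" }
  ],
  "summary": { "models_compared": 2 }
}
```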
## Task Pipeline

generate_script → benchmark_tts_models

| Task | Description |
|---|---|
| generate_script | Generate or use a provided narration script (~60 seconds when spoken). |
| benchmark_tts_models | Run TTS generation with each model and collect results. |
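The two-task flow can be sketched in plain Python. This is a minimal stand-in, not the actual fabric workflow code: the function bodies are stubs, and only the control flow (use the provided script or fall back to the topic, then run every model) mirrors the pipeline described above.

```python
# Minimal sketch of the generate_script -> benchmark_tts_models pipeline.
# Function names mirror the task names; bodies are illustrative stubs.

def generate_script(script: str, topic: str) -> str:
    """Use the provided script, or fall back to generating one from the topic."""
    if script:
        return script
    # Stand-in for the real LLM-backed script-generation step.
    return f"[~60s narration about: {topic}]"

def benchmark_tts_models(narration: str, models: list[str]) -> dict:
    """Run each model on the narration and collect per-model results."""
    results = [{"model": m, "narration_chars": len(narration)} for m in models]
    return {"results": results, "summary": {"models_run": len(models)}}

narration = generate_script("", "The future of AI")
out = benchmark_tts_models(narration, ["model-a", "model-b"])
```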
## Run-spec example

Save the YAML below as my-run.yaml, edit the values, and run it with the CLI or POST it to the API. Required fields are uncommented; optional knobs are documented above the input: block — copy any line under input: and uncomment it to set a value.

```yaml
workflow: benchmarks/tts-quality

# Optional fields — copy any line(s) under `input:` and uncomment to set:
# Subset of models to benchmark. Empty = all available.
# models: []
#
# Generated narration text (populated by generate_script task).
# narration: ""
#
# Direct narration text. If empty, a script is generated from the topic.
# script: ""
#
# Topic to generate a narration script about (ignored if script is provided).
# topic: The future of artificial intelligence and its impact on creative industries
#
# Voice gender: male or female.
# voice_gender: female
#
input: {}
```

Run it locally:

```shell
fab-workflow --from-file my-run.yaml
```

Or submit it over the wire — the same file is the request body:

```shell
curl -X POST 'https://gofabric.dev/v1/workflows/runs?name=benchmarks/tts-quality' \
  -H 'Authorization: Bearer fab_xxx' \
  -H 'content-type: application/yaml' \
  --data-binary @my-run.yaml
```

Every workflow also accepts the universal WorkflowInput fields — variants (1–10 fan-out) and regenerate (creative-direction hints with run lineage). See Run-specs (YAML / TOML / JSON) for the full top-level shape (metadata, priority, bundle, parent, etc.).