
Submitting Jobs

A job is a uniquely identifiable client-facing unit of work. Every job has a canonical job_id and maps to a single-node workflow under the hood.

```typescript
import { FabricClient } from "@fabric-platform/sdk";

const fabric = new FabricClient({ apiKey: "fab_xxx" });

const job = await fabric.createJob({
  modality: "text",
  tier: "basic",
  input: { prompt: "Summarize this article" },
  params: { temperature: 0.5 },
  organizationId: "<org-id>",
});

console.log(job.id, job.status);
```

The response includes the canonical job_id you can use to track the job:

```json
{
  "data": {
    "job_id": "job_01HXYZ...",
    "status": "pending",
    "workflow_run_id": "run_01HXYZ..."
  }
}
```

Jobs progress through these states:

  1. pending — Created, waiting for execution
  2. started — Executor has claimed the job
  3. completed — Finished successfully with output
  4. failed — Execution failed after all retries
Once submitted, jobs can be retrieved and inspected:

```typescript
// Get a specific job
const job = await fabric.getJob("<job-id>");

// List all jobs
const jobs = await fabric.listJobs();

// Get job usage/cost
const usage = await fabric.getJobUsage("<job-id>");
```
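The lifecycle states above suggest a simple polling pattern: fetch the job repeatedly until it reaches a terminal state (completed or failed). A minimal sketch, assuming only that the fetcher returns an object with a status field — waitForJob and JobLike are illustrative helpers, not part of the SDK; in practice you could pass () => fabric.getJob("<job-id>") as the fetcher:

```typescript
type JobStatus = "pending" | "started" | "completed" | "failed";

interface JobLike {
  status: JobStatus;
}

// Poll the given fetcher until the job reaches a terminal state.
async function waitForJob(
  getJob: () => Promise<JobLike>,
  intervalMs = 1000,
): Promise<JobLike> {
  for (;;) {
    const job = await getJob();
    if (job.status === "completed" || job.status === "failed") {
      return job;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

For long-running jobs, the event stream described below is usually preferable to polling.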

Fabric supports idempotent job submission to prevent duplicate work from retries or network failures.

Include an idempotency key in the request (idempotencyKey in the SDK):

```typescript
const job = await fabric.createJob({
  modality: "text",
  input: { prompt: "Generate a summary" },
  organizationId: "<org-id>",
  idempotencyKey: "client-request-abc-123",
});
```

Duplicate submissions with the same idempotency key return the original resource.
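The deduplication behavior can be sketched as a map from idempotency key to the originally created job: the first request with a given key creates the job, and every later request with the same key returns that same resource. This is a local illustration of the semantics, not Fabric's actual server implementation; IdempotentSubmitter and its field names are made up:

```typescript
interface Job {
  jobId: string;
  status: string;
}

// Illustrative in-memory model of idempotent submission.
class IdempotentSubmitter {
  private seen = new Map<string, Job>();
  private counter = 0;

  submit(idempotencyKey: string): Job {
    const existing = this.seen.get(idempotencyKey);
    if (existing) return existing; // duplicate: return the original resource
    const job = { jobId: `job_${++this.counter}`, status: "pending" };
    this.seen.set(idempotencyKey, job);
    return job;
  }
}
```

A stable key per logical request (rather than per retry attempt) is what makes retries safe.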

Before submitting a job, you can estimate its cost:

```typescript
const estimate = await fabric.estimateCost({
  modality: "text",
  model: "gpt-4",
  input: { prompt: "Generate a summary" },
});
```

Local models (Ollama, Whisper) always return $0.00.
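The pricing rule above — local providers are free, remote models are priced — can be sketched as a small function. The per-token rate and the model-name matching here are placeholder assumptions for illustration, not Fabric's real price table:

```typescript
// Models served locally always estimate to $0.00.
const LOCAL_MODELS = new Set(["ollama", "whisper"]);

// Estimate cost in USD; ratePerToken is a made-up placeholder rate.
function estimateCostUsd(
  model: string,
  promptTokens: number,
  ratePerToken = 0.00003,
): number {
  if (LOCAL_MODELS.has(model)) return 0; // local models are free
  return promptTokens * ratePerToken;
}
```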

Subscribe to real-time events for a specific job:

```typescript
await fabric.streamEvents("<job-id>", (event) => {
  console.log(event.kind, event.payload);
});
```

Job event types:

| Event | Description |
| --- | --- |
| job.created | Job was created |
| job.started | Executor began processing |
| job.completed | Job finished with output |
| job.failed | Job execution failed |

The per-job event stream replays all existing events on connect (catch-up), then streams live events as they occur.
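The catch-up-then-live behavior can be modeled as an event log plus a listener list: a new subscriber first receives every stored event in order, then is registered for anything emitted afterwards. JobEventStream below is a local illustration of those semantics, not the Fabric server implementation:

```typescript
interface JobEvent {
  kind: string;
  payload?: unknown;
}

class JobEventStream {
  private history: JobEvent[] = [];
  private listeners: Array<(e: JobEvent) => void> = [];

  emit(event: JobEvent): void {
    this.history.push(event);
    for (const listener of this.listeners) listener(event);
  }

  subscribe(listener: (e: JobEvent) => void): void {
    // Replay all existing events on connect (catch-up)...
    for (const event of this.history) listener(event);
    // ...then stream live events as they occur.
    this.listeners.push(listener);
  }
}
```

Because the replay is ordered before registration, a subscriber never misses an event and never sees one twice.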

Jobs support these modalities, routed to the appropriate provider:

| Modality | Description | Example Providers |
| --- | --- | --- |
| text | Text generation and completion | OpenAI, Anthropic, Ollama |
| image | Image generation | OpenAI (DALL-E), ComfyUI |
| audio | Audio transcription | Whisper |
| embedding | Vector embeddings | OpenAI, Ollama (nomic-embed-text) |

Specify a tier preference to control provider selection:

  • basic — Local/free providers (Ollama, Whisper, Echo)
  • premium — Remote/paid providers (OpenAI, Anthropic)

If no tier is specified, the router selects by cost (cheapest first), then health status.
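One reading of that default rule — sort candidates by cost, then take the first healthy one — can be sketched as follows. The Provider shape, the costs, and the exact interplay of cost and health are illustrative assumptions, not the router's actual implementation:

```typescript
interface Provider {
  name: string;
  costPerJob: number;
  healthy: boolean;
}

// Pick the cheapest provider that is currently healthy.
function selectProvider(providers: Provider[]): Provider | undefined {
  return [...providers]
    .sort((a, b) => a.costPerJob - b.costPerJob)
    .find((p) => p.healthy);
}
```

Under this sketch, a free local provider that is down is skipped in favor of the next-cheapest healthy one.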