Examples

These examples are ordered from the highest-level workflow abstraction down to the most direct client calls.

Workflow example

This is the preferred way to build with ProviderPlane. It uses the workflow layer directly and shows fan-out, fan-in, approval, and a custom capability that runs after approval.

// ProviderPlane imports (AIClient, WorkflowRunner, MultiModalExecutionContext,
// Pipeline) are omitted here, as are the small application helpers used below.
const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

client.registerCapabilityExecutor("customApprovalSummary", {
  streaming: false,
  async invoke(_provider, request: any) {
    const status = String(request?.input?.status ?? "");
    const approver = String(request?.input?.approver ?? "unknown");
    const flagged = Boolean(request?.input?.moderationFlagged);
    const summary = `approval=${status}; approver=${approver}; moderationFlagged=${flagged}`;

    return {
      output: summary,
      rawResponse: { summary, status, approver, flagged },
      id: `customApprovalSummary-${Date.now()}`,
      metadata: {}
    };
  }
});

const workflowId = "workflow-example";

const pipeline = new Pipeline<{
  quoteText: string;
  transcriptText: string;
  translationText: string;
  moderationFlagged: boolean;
  approvalStatus: string;
  approvalReason: string;
  approver: string;
  approvalSummary: string;
}>(workflowId, {});

const generateQuote = pipeline.step("generateQuote");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");
const translate = pipeline.step("translate");
const moderate = pipeline.step("moderate");
const approval = pipeline.step("approval");
const approvalSummary = pipeline.step("approvalSummary");

const workflow = pipeline
  .chat(generateQuote.id, "Generate an inspirational quote in French")
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateQuote })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts })
  .translate(translate.id, { targetLanguage: "english", responseFormat: "text" }, { source: tts })
  .moderate(moderate.id, {}, { source: [transcribe, translate] })
  .approvalGate(
    approval.id,
    {
      input: (values) => ({
        requestedAt: new Date().toISOString(),
        decision: {
          status: "approved",
          reason: "README example auto-approved",
          approver: "system"
        },
        draft: extractAssistantText(values.generateQuote),
        transcript: normalizeTranscriptText(values.transcribe),
        translation: extractAssistantText(values.translate),
        moderationFlagged: isModerationFlagged(values.moderate)
      })
    },
    { after: moderate }
  )
  .customAfter(approval, approvalSummary.id, "customApprovalSummary", (_ctx, state) => ({
    input: {
      status: String((state.values.approval as any)?.status ?? ""),
      approver: String((state.values.approval as any)?.approver ?? ""),
      moderationFlagged: isModerationFlagged(state.values.moderate)
    }
  }))
  .output((values) => ({
    quoteText: extractAssistantText(values.generateQuote),
    transcriptText: normalizeTranscriptText(values.transcribe),
    translationText: extractAssistantText(values.translate),
    moderationFlagged: isModerationFlagged(values.moderate),
    approvalStatus: String((values.approval as any)?.status ?? ""),
    approvalReason: String((values.approval as any)?.reason ?? ""),
    approver: String((values.approval as any)?.approver ?? ""),
    approvalSummary: String(values.approvalSummary ?? "")
  }))
  .build();

const execution = await runner.run(workflow, ctx);

console.log("Workflow example status:", execution.status);
console.log("Workflow example output:", execution.output);

Workflow shape

  • This is the recommended abstraction level for most application code.
  • The workflow fans out after TTS into transcription and translation, then joins those results at moderation.
  • "approvalGate" models a human or policy approval step directly in the workflow.
  • "customApprovalSummary" shows how to add application-specific logic without dropping out of the workflow layer.
  • "extractAssistantText", "normalizeTranscriptText", and "isModerationFlagged" are omitted for brevity. They are ordinary application helpers, not workflow-specific ProviderPlane APIs.
  • This example is a good reference point for how ProviderPlane handles explicit workflow shape, custom logic, and multi-step execution in one place.
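The omitted helpers can be sketched as ordinary TypeScript functions. The result shapes they read from (`output`, `text`, `flagged`/`results`) are assumptions for illustration, not documented ProviderPlane contracts:

```typescript
// Illustrative sketches of the helpers referenced above. The field names
// on the step results are assumptions, not ProviderPlane APIs.
function extractAssistantText(value: unknown): string {
  // Assume chat-style steps resolve to a plain string or an object
  // carrying a normalized `output` string.
  if (typeof value === "string") return value;
  const output = (value as { output?: unknown } | undefined)?.output;
  return typeof output === "string" ? output : "";
}

function normalizeTranscriptText(value: unknown): string {
  // Assume transcription results carry `text`; otherwise fall back to
  // the same extraction used for chat output.
  const text = (value as { text?: unknown } | undefined)?.text;
  return typeof text === "string" ? text.trim() : extractAssistantText(value).trim();
}

function isModerationFlagged(value: unknown): boolean {
  // Assume moderation results expose a boolean `flagged`, possibly
  // nested under `results` as in common moderation APIs.
  const v = value as { flagged?: unknown; results?: Array<{ flagged?: unknown }> } | undefined;
  const flagged = v?.flagged;
  if (typeof flagged === "boolean") return flagged;
  return Boolean(v?.results?.some((r) => r?.flagged === true));
}
```

Because each helper accepts `unknown` and degrades to an empty value, the workflow's `input` and `output` mappers stay total even when a step produced nothing.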

Persistence and resume example

This example demonstrates persistence and resume in the workflow layer. The first run fails intentionally and persists an execution snapshot; a later run resumes from that snapshot instead of starting over.

// Node "fs"/"path" imports and ProviderPlane imports are omitted here.
const runtimeName = "live-persistence-resume-docs";
const { client, runner, shouldResumeWorkflow } = createFileBackedWorkflowRuntime(runtimeName);
const ctx = new MultiModalExecutionContext();

// This custom capability simulates a gate that fails on the first invocation
// and succeeds on subsequent invocations.
client.registerCapabilityExecutor("customFailOnceGate", {
  streaming: false,
  async invoke(_provider, request: any) {
    const runtimeDir = path.join(WORKFLOW_RUNTIME_ROOT_DIR, runtimeName);
    const failMarkerPath = path.join(runtimeDir, "fail-once.marker");
    mkdirSync(path.dirname(failMarkerPath), { recursive: true });
    const inputValue = String(request?.input?.value ?? "");

    if (!existsSync(failMarkerPath)) {
      writeFileSync(failMarkerPath, String(Date.now()), "utf8");
      throw new Error("intentional-fail-once-gate");
    }

    return {
      output: `${inputValue}-gate-passed`,
      rawResponse: { inputValue, gate: "passed" },
      id: `customFailOnceGate-${Date.now()}`,
      metadata: { gate: "passed" }
    };
  }
});

const workflowId = "live-persistence-resume-example-docs-workflow";
const pipeline = new Pipeline<{
  quoteText: string;
  gatedText: string;
  finalText: string;
}>(workflowId, {});

// Override provider chain so this step does not fall back to other
// provider/connection combinations during the fail-once simulation.
const providerChain = [{ providerType: AIProvider.OpenAI, connectionName: "default" }];

const generateQuote = pipeline.step("generateQuote");
const failOnceGate = pipeline.step("failOnceGate");
const echoResult = pipeline.step("echoResult");

const workflow = pipeline
  .defaults({
    providerChain,
    retry: { attempts: 1 },
    timeoutMs: 25000
  })
  .chat(generateQuote.id, "Generate a quote about cats")
  .customAfter(
    generateQuote,
    failOnceGate.id,
    "customFailOnceGate",
    (_ctx, state) => ({
      input: {
        value: extractAssistantText(state.values[generateQuote.id])
      }
    }),
    { providerChain }
  )
  .chat(
    echoResult.id,
    (values) => `Echo this exactly:\n${String(values[failOnceGate.id] ?? "")}-finalized`,
    { after: failOnceGate }
  )
  .output((values) => ({
    quoteText: extractAssistantText(values[generateQuote.id]),
    gatedText: String(values[failOnceGate.id] ?? ""),
    finalText: extractAssistantText(values[echoResult.id])
  }))
  .build();

const resumeMode = shouldResumeWorkflow(workflow.id);

try {
  const execution: WorkflowExecution<{
    quoteText: string;
    gatedText: string;
    finalText: string;
  }> = resumeMode ? await runner.resume(workflow, ctx) : await runner.run(workflow, ctx);

  console.log("Live persistence resume docs workflow output:", execution.output);
} catch (error) {
  console.log(
    "Live persistence resume docs first pass failed as expected (run again to resume):",
    error instanceof Error ? error.message : String(error)
  );
  throw error;
}

Workflow shape

  • "createFileBackedWorkflowRuntime(...)" is user-defined support code, as are "shouldResumeWorkflow(...)" and the "WORKFLOW_RUNTIME_ROOT_DIR" constant. In this example they wire in file-backed persistence and resume, but the same pattern could use any backing data store.
  • "extractAssistantText(...)" is ordinary application helper code, not part of the persistence mechanism itself.
  • The first run writes the fail marker and throws, which leaves a persisted workflow snapshot behind.
  • A later run resumes from that unfinished snapshot and continues with only the remaining steps.
  • This is a workflow example, not a separate persistence subsystem. The persistence hooks support the workflow layer rather than replacing it.
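As one entirely user-defined sketch of that support code, the resume decision can be as simple as checking whether a snapshot file already exists under a runtime directory. The layout and names here (the runtime root, the `*.snapshot.json` suffix) are assumptions for illustration:

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import path from "node:path";

// Hypothetical runtime root, standing in for WORKFLOW_RUNTIME_ROOT_DIR.
const WORKFLOW_RUNTIME_ROOT_DIR = path.join(process.cwd(), ".workflow-runtimes");

// Resume when a persisted snapshot for this workflow already exists on disk.
function makeShouldResumeWorkflow(runtimeName: string) {
  const runtimeDir = path.join(WORKFLOW_RUNTIME_ROOT_DIR, runtimeName);
  mkdirSync(runtimeDir, { recursive: true });
  return (workflowId: string): boolean =>
    existsSync(path.join(runtimeDir, `${workflowId}.snapshot.json`));
}

// A matching persistence hook that writes an execution snapshot to disk.
function persistSnapshot(runtimeName: string, workflowId: string, snapshot: unknown): void {
  const runtimeDir = path.join(WORKFLOW_RUNTIME_ROOT_DIR, runtimeName);
  mkdirSync(runtimeDir, { recursive: true });
  writeFileSync(
    path.join(runtimeDir, `${workflowId}.snapshot.json`),
    JSON.stringify(snapshot, null, 2),
    "utf8"
  );
}
```

A real implementation would also clear the snapshot after a successful run, so the next invocation starts fresh instead of resuming a finished execution.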

Job example

This is the middle layer. It runs one capability through the job system instead of building a workflow graph. Use it when you need finer execution control without dropping all the way down to direct client calls.

console.log("=== OpenAI TTS example ===");

const jobManager = new JobManager();

const client = new AIClient(jobManager);
const ctx = new MultiModalExecutionContext();

const request: ClientTextToSpeechRequest = {
  text: "Hello from ProviderPlaneAI. This is a Responses API text to speech example.",
  voice: "alloy",
  format: "mp3"
};

const job = client.createCapabilityJob<"audioTts", ClientTextToSpeechRequest, NormalizedAudio[]>(
  "audioTts",
  { input: request }
);

// runJob starts execution; completion is observed through the job's promise.
jobManager.runJob(job.id, ctx);
const result = await job.getCompletionPromise();
const audio = result?.[0];

if (!audio?.base64) {
  throw new Error("No TTS audio bytes returned.");
}

console.log("[OpenAITTS] result:", JSON.stringify(result, null, 2));

Job shape

  • This example intentionally skips the workflow layer and runs a single capability job directly.
  • "createCapabilityJob(...)" builds the job, and "jobManager.runJob(...)" queues and executes it.
  • The result is still normalized by ProviderPlane, so the returned audio payload follows the library's common output shape.
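Because the result is normalized, persisting the audio is straightforward. This sketch assumes the payload carries `base64` (as checked above) plus a `format` field; the `format` field is an assumption, only `base64` appears in the example:

```typescript
import { writeFileSync } from "node:fs";

// Write a normalized audio payload to disk and return the file path.
// The `format` field is assumed for illustration; mp3 is the fallback.
function saveNormalizedAudio(
  audio: { base64: string; format?: string },
  basename: string
): string {
  const filePath = `${basename}.${audio.format ?? "mp3"}`;
  writeFileSync(filePath, Buffer.from(audio.base64, "base64"));
  return filePath;
}
```

With the job result from above, `saveNormalizedAudio(audio, "tts-output")` would leave an `mp3` file next to the script.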

Direct client example

This is the lowest public layer. It calls one capability directly through the client, without jobs or a workflow graph, while still using the configured provider chain.

const client = new AIClient();
const ctx = new MultiModalExecutionContext();

const result = await client.chat(
  {
    input: {
      messages: [
        {
          role: "user",
          content: [{ type: "text", text: "Explain quantum computing in 4 lines." }]
        }
      ]
    }
  },
  ctx
);

console.log(result.output);

Direct client shape

  • This is the least abstracted public layer.
  • The configured provider chain still applies, so ProviderPlane still handles provider selection and fallback.
  • Use this when you want a single capability call without jobs or a workflow graph.
  • Once calls start depending on each other, move up to jobs or the workflow layer.