# ProviderPlaneAI

ProviderPlaneAI is a workflow-first AI orchestration framework for Node.js. It provides a provider-agnostic workflow layer above raw model SDKs.

See providerplane.dev for guides, examples, configuration, changelog, and API reference. See providerplane.ai for the main project site.

## Installation

```bash
npm install providerplaneai
```
## Configuration

ProviderPlaneAI loads configuration via node-config plus dotenv. Create `config/default.json` (or environment-specific config files) with a `providerplane` section containing `appConfig` and `providers`.

Minimal example:
```json
{
  "providerplane": {
    "appConfig": {
      "executionPolicy": {
        "providerChain": [
          { "providerType": "openai", "connectionName": "default" }
        ]
      }
    },
    "providers": {
      "openai": {
        "default": {
          "type": "openai",
          "apiKeyEnvVar": "OPENAI_API_KEY_1",
          "defaultModel": "gpt-5"
        }
      }
    }
  }
}
```
Minimal `.env` for the config above:

```bash
OPENAI_API_KEY_1=your_openai_api_key
```
For full multi-provider config and environment examples covering OpenAI, Gemini, Anthropic, and Voyage, see providerplane.dev.
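The `apiKeyEnvVar` indirection keeps secrets out of config files: the config names an environment variable, and the key itself is read from the environment at startup. The resolution step can be sketched standalone (the `resolveApiKey` helper and `ProviderConnection` shape are illustrative, not part of the library's API):

```ts
// A provider connection as it appears under `providers.openai.default`
// in config/default.json (illustrative shape).
interface ProviderConnection {
  type: string;
  apiKeyEnvVar: string;
  defaultModel?: string;
}

// Look up the secret named by `apiKeyEnvVar` in an environment map
// (e.g. process.env, populated by dotenv), failing fast if it is absent.
function resolveApiKey(
  conn: ProviderConnection,
  env: Record<string, string | undefined>
): string {
  const key = env[conn.apiKeyEnvVar];
  if (!key) {
    throw new Error(`Missing environment variable: ${conn.apiKeyEnvVar}`);
  }
  return key;
}
```

In practice you would pass `process.env` as the environment map after dotenv has loaded your `.env` file.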
## Quick start

```ts
import {
  AIClient,
  MultiModalExecutionContext,
  Pipeline,
  WorkflowRunner
} from "providerplaneai";

const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  audioArtifactId: string;
}>("readme-workflow-1", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");

// Build a workflow: chat -> tts -> transcribe
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", {
    normalize: "text"
  })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    audioArtifactId: String((values.tts as any[])?.[0]?.id ?? "")
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);
console.log("Output", execution.output);
```
The resulting workflow graph:

```mermaid
graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n0 --> n1
  n1 --> n2
```
For most applications, this is the right abstraction level: the workflow layer via `Pipeline` plus `WorkflowRunner`. Use direct jobs only when you need low-level control outside a workflow DAG, are integrating with an external scheduler, or are building custom orchestration on top of the library.

Providers listed in `appConfig.executionPolicy.providerChain` are initialized automatically when `AIClient` is constructed.
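The ordering of `providerChain` suggests failover semantics: try the first connection, and fall back to the next entry on error. The library's actual execution policy is richer than this, but the core idea can be sketched standalone (names here are illustrative, not the library's API):

```ts
// One entry in appConfig.executionPolicy.providerChain.
interface ChainEntry {
  providerType: string;
  connectionName: string;
}

// Try each provider in chain order; return the first successful result,
// or throw once every provider in the chain has failed.
async function runWithFallback<T>(
  chain: ChainEntry[],
  call: (entry: ChainEntry) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const entry of chain) {
    try {
      return await call(entry);
    } catch (err) {
      lastError = err; // remember the failure and move to the next provider
    }
  }
  throw new Error(`All ${chain.length} providers failed: ${String(lastError)}`);
}
```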
## Workflows

ProviderPlaneAI includes a DAG workflow engine for orchestrating multi-step AI workflows. `Pipeline` is the recommended authoring API; `WorkflowBuilder` remains available for advanced node-level control.
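Conceptually, a DAG engine runs each step once all of its dependencies have produced output. That scheduling can be sketched in a few lines (a self-contained illustration of the idea, not the library's engine):

```ts
// A workflow node: an id, the ids it depends on, and a step function
// that receives its dependencies' outputs keyed by id.
interface DagNode {
  id: string;
  deps: string[];
  run: (inputs: Record<string, unknown>) => unknown;
}

// Execute nodes in dependency order (Kahn-style topological traversal).
function runDag(nodes: DagNode[]): Record<string, unknown> {
  const results: Record<string, unknown> = {};
  const pending = [...nodes];
  while (pending.length > 0) {
    // Find a node whose dependencies have all produced results.
    const ready = pending.findIndex((n) => n.deps.every((d) => d in results));
    if (ready === -1) throw new Error("Cycle or missing dependency in workflow graph");
    const [node] = pending.splice(ready, 1);
    const inputs: Record<string, unknown> = {};
    for (const d of node.deps) inputs[d] = results[d];
    results[node.id] = node.run(inputs);
  }
  return results;
}
```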
Use:

- `Pipeline` for most workflows
- `WorkflowRunner` for execution
- `WorkflowExporter` for visualization and export
- `WorkflowBuilder` for advanced custom graph construction

```ts
const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  translationText: string;
  moderationFlagged: boolean;
}>("readme-workflow-2", {});

// Typed step handles keep `source` and `after` references readable and safe
const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");
const translate = pipeline.step("translate");
const moderate = pipeline.step("moderate");

// Build a workflow: chat -> tts -> transcribe + translate -> moderate
const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", { normalize: "text" })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .translate(translate.id, { targetLanguage: "english", responseFormat: "text" }, { source: tts, normalize: "text" })
  .moderate(moderate.id, {}, { source: [transcribe, translate] })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    translationText: String(values.translate ?? ""),
    moderationFlagged: Boolean((values.moderate as any)?.[0]?.flagged ?? false)
  }))
  .build();

// Run the workflow
const execution = await runner.run(workflow, ctx);
console.log("Output", execution.output);
```
Notes:

- `source` binds step input to upstream output and can be either a single step or an array of steps.
- `after` adds ordering dependencies when you need sequencing without data binding.
- Typed handles from `pipeline.step("...")` reduce stringly-typed wiring mistakes.
- `custom(...)` and `customAfter(...)` are escape hatches for custom capability steps without dropping to `WorkflowBuilder`.
- If you call `createCapabilityJob` in application code, you are usually below the preferred abstraction level.

The resulting workflow graph:

```mermaid
graph TD
  n0["generateText"]
  n1["tts"]
  n2["transcribe"]
  n3["translate"]
  n4["moderate"]
  n0 --> n1
  n1 --> n2
  n1 --> n3
  n2 --> n4
  n3 --> n4
```
For the full Pipeline method reference and step-by-step DSL documentation, see providerplane.dev.
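The Mermaid diagrams above are just the workflow's node and edge lists rendered as `graph TD`. Assuming `WorkflowExporter` emits something similar, the rendering itself reduces to a small function (a standalone sketch, not the exporter's actual implementation):

```ts
// Render step names and edges as a Mermaid `graph TD` definition,
// numbering nodes n0, n1, ... in declaration order.
function toMermaid(steps: string[], edges: [string, string][]): string {
  const ids = new Map(steps.map((s, i) => [s, `n${i}`] as [string, string]));
  const lines = ["graph TD"];
  for (const s of steps) lines.push(`  ${ids.get(s)}["${s}"]`);
  for (const [from, to] of edges) lines.push(`  ${ids.get(from)} --> ${ids.get(to)}`);
  return lines.join("\n");
}
```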
The built-in steps `approvalGate` and `saveFile` are registered by default and are intended for workflow authoring rather than provider-specific model calls.
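Conceptually, an approval gate suspends a workflow until an external decision arrives, then either continues or halts the run. A minimal standalone sketch of that pattern (illustrative only; the library's `approvalGate` step has its own API):

```ts
// A gate that blocks a workflow step until approve() or reject() is called.
class ApprovalGate {
  private resolveDecision!: (approved: boolean) => void;
  readonly decision = new Promise<boolean>((res) => (this.resolveDecision = res));

  approve(): void { this.resolveDecision(true); }
  reject(): void { this.resolveDecision(false); }

  // Wait for the decision; throw if rejected so the workflow halts.
  async wait(): Promise<void> {
    if (!(await this.decision)) throw new Error("Step rejected by reviewer");
  }
}
```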
- Use `WorkflowBuilder` when you need direct node functions or full control over graph construction.
- Use `WorkflowExporter` to render workflows as Mermaid, DOT, D3, or JSON.
- Prefer `Pipeline` for the common path.

## Development

```bash
npm run build
npm run test
npm run lint
npm run perf:quick
```
For integration testing, PR title conventions, release workflow notes, and contribution guidance, see CONTRIBUTING.md.
## License

MIT