# Getting Started
This guide gets you from install to a first working workflow as quickly as possible.
- Install the package
- Configure one provider connection
- Initialize ProviderPlane for workflow execution
- Define typed workflow steps
- Run your first workflow successfully
## Install ProviderPlane
Start by installing the package into your Node.js application.
```shell
npm install providerplaneai
```

Requirements:

- Node.js 20+
- TypeScript 5+
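If you are unsure whether your runtime meets the minimum, you can check it before installing. This is a generic verification step, not a ProviderPlane command:

```shell
# Print the installed Node.js version; the guide requires v20 or later.
node --version
```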
## Configure one provider connection
Create or update `config/default.json` in your application with a minimal ProviderPlane config. Define one named provider connection, then make that connection the default entry in the provider chain.
```json
{
  "providerplane": {
    "appConfig": {
      "executionPolicy": {
        "providerChain": [
          { "providerType": "openai", "connectionName": "default" }
        ]
      },
      ...
    },
    "providers": {
      "openai": {
        "default": {
          "type": "openai",
          "apiKeyEnvVar": "OPENAI_API_KEY_1",
          "defaultModel": "gpt-5"
        }
      },
      ...
    }
  }
}
```

This is a partial example. The `...` markers show config omitted for clarity.
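Connection names under a provider are arbitrary, and the `providerChain` array suggests that additional entries act as ordered fallbacks. As an illustrative sketch only (the `backup` connection name and the `OPENAI_API_KEY_2` variable are hypothetical, not part of this guide), a second connection might be registered like this:

```json
{
  "providers": {
    "openai": {
      "default": {
        "type": "openai",
        "apiKeyEnvVar": "OPENAI_API_KEY_1",
        "defaultModel": "gpt-5"
      },
      "backup": {
        "type": "openai",
        "apiKeyEnvVar": "OPENAI_API_KEY_2",
        "defaultModel": "gpt-5"
      }
    }
  }
}
```

Each connection keeps its own `apiKeyEnvVar`, so credentials remain separated per connection.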
### Add the API key to the environment
Add the matching API key to your application's `.env` file.
```
OPENAI_API_KEY_1=your_openai_api_key
```

The connection looks up its secret through `apiKeyEnvVar`, so credentials stay in the environment instead of the JSON config.
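A missing environment variable usually surfaces only when the first request runs, so it can help to fail fast at startup. This is a generic sketch, not part of ProviderPlane's API; the `requireEnv` helper name is ours:

```typescript
// Fail fast if a required environment variable (such as the one named by
// apiKeyEnvVar in the config) is missing or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable ${name}; add it to your .env file`);
  }
  return value;
}
```

Calling `requireEnv("OPENAI_API_KEY_1")` during application startup turns a misconfigured key into an immediate, descriptive error instead of a mid-workflow failure.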
## Build your first workflow
This example builds a small three-step workflow: generate text, convert it to speech, and transcribe the result. It is simple, but it shows the main workflow pattern clearly.
```typescript
import {
  AIClient,
  MultiModalExecutionContext,
  Pipeline,
  WorkflowRunner
} from "providerplaneai";

const client = new AIClient();
const runner = new WorkflowRunner({ jobManager: client.jobManager, client });
const ctx = new MultiModalExecutionContext();

const pipeline = new Pipeline<{
  generatedText: string;
  transcriptText: string;
  audioArtifactId: string;
}>("readme-workflow-1", {});

const generateText = pipeline.step("generateText");
const tts = pipeline.step("tts");
const transcribe = pipeline.step("transcribe");

const workflow = pipeline
  .chat(generateText.id, "Generate one short inspirational quote in French.", {
    normalize: "text"
  })
  .tts(tts.id, { voice: "alloy", format: "mp3" }, { source: generateText })
  .transcribe(transcribe.id, { responseFormat: "text" }, { source: tts, normalize: "text" })
  .output((values) => ({
    generatedText: String(values.generateText ?? ""),
    transcriptText: String(values.transcribe ?? ""),
    audioArtifactId: String((values.tts as any[])?.[0]?.id ?? "")
  }))
  .build();

const execution = await runner.run(workflow, ctx);
console.log("Output", execution.output);
```

### Workflow shape
- Typed step handles keep `source` and `after` references readable and safe.
- For most applications, this is the right level to start from before reaching for lower-level APIs.
- If everything is wired correctly, the final output includes the generated text, the transcript text, and the artifact id for the synthesized audio.
