Configuration
This page explains the shape of the ProviderPlane config tree and the few sections that matter most early on: the "providerplane" namespace, the default provider chain, the provider connections that back that chain, and the environment variables used for credentials.
Use the providerplane namespace
ProviderPlane only reads from the "providerplane" namespace. That keeps library configuration separate from the rest of the application's config tree.
{
"providerplane": {
"appConfig": {...},
"providers": {...},
...
}
}
Define the default provider chain
The "providerChain" defines which providers ProviderPlane will try, and in what order. Each entry points at one provider and one named connection, and ProviderPlane will attempt them in sequence until one succeeds or the entire chain fails.
{
"providerplane": {
"appConfig": {
...,
"executionPolicy": {
"providerChain": [
{ "providerType": "openai", "connectionName": "default" },
{ "providerType": "anthropic", "connectionName": "default" }
]
}
},
...
}
}
- Each entry must point to a provider connection that exists under "providerplane.providers", which is shown in the next example.
- ProviderPlane starts with the first entry in the array, then moves to the next only if the earlier attempt fails.
- You can place providers in any order, and you can include the same provider more than once by using different connection names.
- Every connection listed in the default "providerChain" is initialized when the ProviderPlane client starts up.
- If the entire chain fails, ProviderPlane throws an exception.
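The sequential-attempt behavior described above can be sketched in a few lines of Python. This is an illustration of the policy, not ProviderPlane's actual implementation; the `attempt` callable and the `ChainExhaustedError` name are placeholders:

```python
class ChainExhaustedError(Exception):
    """Raised when every entry in the provider chain has failed."""

def run_chain(provider_chain, attempt):
    """Try each (providerType, connectionName) entry in order.

    `attempt` stands in for whatever actually calls the provider; it
    should raise on failure and return a result on success.
    """
    errors = []
    for entry in provider_chain:
        try:
            return attempt(entry["providerType"], entry["connectionName"])
        except Exception as exc:  # a real client would catch narrower errors
            errors.append((entry, exc))
    raise ChainExhaustedError(errors)

# Example: the first provider fails, so the chain falls through to the second.
chain = [
    {"providerType": "openai", "connectionName": "default"},
    {"providerType": "anthropic", "connectionName": "default"},
]

def fake_attempt(provider, connection):
    if provider == "openai":
        raise RuntimeError("simulated outage")
    return f"{provider}/{connection}"

print(run_chain(chain, fake_attempt))  # → anthropic/default
```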
Define provider connections
The "providers" section is where you define the named provider connections used by the provider chain. Think of each connection as one reusable provider profile: it tells ProviderPlane which provider to call, which API key to use, which model to treat as the baseline, and how to adjust that baseline for specific capabilities.
{
"providerplane": {
...,
"providers": {
"openai": {
"default": {
"type": "openai",
"apiKeyEnvVar": "OPENAI_API_KEY_1",
"defaultModel": "gpt-5",
"defaultModels": {
"audioTts": "gpt-4o-mini-tts",
"audioTranscription": "gpt-4o-transcribe"
},
"models": {
"gpt-5": {
"chatStream": {
"generalParams": {
"chatStreamBatchSize": 64
}
}
}
}
},
"fallback": {
"type": "openai",
"apiKeyEnvVar": "OPENAI_API_KEY_2",
"defaultModel": "gpt-5"
}
},
...
}
}
}
- Each named connection is a reusable configuration that the provider chain can point at. In the example above, OpenAI has both a "default" connection and a "fallback" connection.
- "defaultModel" is the baseline for that connection. If nothing more specific is configured, ProviderPlane uses that model for calls made through the connection.
- "defaultModels" lets the connection swap in a different model for specific capabilities. That is how one connection can use one model for chat and another for transcription, TTS, embeddings, or image work.
- "models" is where you go deeper. It lets you attach capability-specific settings to one model, such as provider API parameters in "modelParams" or ProviderPlane behavior in "generalParams".
- "apiKeyEnvVar" links the connection to an environment variable in your application's `.env` file, so the provider chain can switch between connections without hard-coding secrets in config.
- In practice, most people start with one named connection per provider and only add fallback connections when they actually need them.
- These connection-level defaults are still overridable. If one request needs different behavior, you can supply that override at the call site.
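To make the split between "modelParams" and "generalParams" concrete, here is a hypothetical fragment of a connection's "models" section. The "chatStreamBatchSize" value comes from the earlier example; the "temperature" entry is a placeholder for whatever parameter the provider's API actually accepts:

```json
"models": {
  "gpt-5": {
    "chatStream": {
      "modelParams": {
        "temperature": 0.2
      },
      "generalParams": {
        "chatStreamBatchSize": 64
      }
    }
  }
}
```

Settings under "modelParams" are passed through to the provider's API, while "generalParams" configures ProviderPlane's own behavior for that capability.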
Set credentials through environment variables
Put provider credentials in your application's `.env` file. Each connection points at an environment variable through "apiKeyEnvVar", which lets the same provider use different credentials for different named connections.
# OpenAI
OPENAI_API_KEY_1=xxx
OPENAI_API_KEY_2=yyy
# Anthropic
ANTHROPIC_API_KEY_1=xxx
ANTHROPIC_API_KEY_2=yyy
VOYAGE_API_KEY=xxx
# Gemini
GEMINI_API_KEY_1=xxx
- At least one provider must be defined under "providerplane.providers".
- Each connection must declare "apiKeyEnvVar".
- If the referenced environment variable is missing, config loading fails immediately.
- If two connections point at different environment variable names, they can use different credentials even when they target the same provider.
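The fail-fast rule above can be illustrated with a small sketch (not ProviderPlane's code): resolve every "apiKeyEnvVar" at load time and raise immediately if any referenced variable is unset. The function name `resolve_api_keys` is a placeholder:

```python
import os

def resolve_api_keys(providers):
    """Map each connection to its credential, failing fast on missing vars.

    `providers` mirrors the shape of "providerplane.providers" shown above.
    """
    keys = {}
    for provider_name, connections in providers.items():
        for conn_name, conn in connections.items():
            var = conn["apiKeyEnvVar"]
            value = os.environ.get(var)
            if value is None:
                raise RuntimeError(
                    f"{provider_name}/{conn_name}: environment variable {var} is not set"
                )
            keys[(provider_name, conn_name)] = value
    return keys

os.environ["OPENAI_API_KEY_1"] = "xxx"  # stand-in for a .env loader
providers = {
    "openai": {
        "default": {"type": "openai", "apiKeyEnvVar": "OPENAI_API_KEY_1"},
    }
}
print(resolve_api_keys(providers))  # → {('openai', 'default'): 'xxx'}
```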
The config files in this repository are for development and examples. They are not shipped with the published package.
Full working example
This example is based on the repository default config. It shows a provider chain with an OpenAI default and fallback connection, followed by Gemini and Anthropic. Use it as a starting point, then trim it down to the providers and capabilities your application actually uses. The OpenAI "default" connection is shown in more detail, while the later provider sections are intentionally truncated.
{
"providerplane": {
"appConfig": {
"maxConcurrency": 128,
"maxQueueSize": 1024,
"maxRemoteImageBytes": 10485760,
"maxStoredResponseChunks": 1024,
"storeRawResponses": true,
"stripBinaryPayloadsInSnapshotsAndTimeline": true,
"maxRawBytesPerJob": 1048576,
"remoteImageFetchTimeoutMs": 16384,
"executionPolicy": {
"providerChain": [
{ "providerType": "openai", "connectionName": "default" },
{ "providerType": "openai", "connectionName": "fallback" },
{ "providerType": "gemini", "connectionName": "default" },
{ "providerType": "anthropic", "connectionName": "default" }
]
}
},
"providers": {
"openai": {
"default": {
"type": "openai",
"apiKeyEnvVar": "OPENAI_API_KEY_1",
"defaultModel": "gpt-5",
"defaultModels": {
"chat": "gpt-5",
"embed": "text-embedding-3-large",
"imageGeneration": "gpt-4.1",
"audioTranscription": "gpt-4o-transcribe",
"audioTranslation": "whisper-1",
"audioTts": "gpt-4o-mini-tts",
"videoGeneration": "sora-2",
"moderation": "omni-moderation-latest"
}
},
"fallback": {
"type": "openai",
"apiKeyEnvVar": "OPENAI_API_KEY_2",
"defaultModel": "gpt-5"
}
},
"anthropic": {
"default": {
"type": "anthropic",
"apiKeyEnvVar": "ANTHROPIC_API_KEY_1",
"defaultModel": "claude-sonnet-4-5-20250929"
}
},
"gemini": {
"default": {
"type": "gemini",
"apiKeyEnvVar": "GEMINI_API_KEY_1",
"defaultModel": "gemini-2.5-flash-lite"
}
}
}
}
}