Creates a new OpenAI chat capability implementation.
Owning provider instance
Initialized OpenAI SDK client
Executes a non-streaming chat request using the OpenAI Responses API.
Unified AI chat request
_executionContext: MultiModalExecutionContext - Optional execution context
signal: AbortSignal - Optional abort signal
AIResponse containing the output
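A minimal sketch of how a caller might drive such a non-streaming call, forwarding the optional abort signal. The `AIRequest`/`AIResponse` shapes and the `runChat` helper here are illustrative assumptions, not the actual unified AI types:

```typescript
// Hypothetical request/response shapes standing in for the
// unified AI chat types referenced in this documentation.
interface AIRequest {
  messages: { role: string; content: string }[];
}
interface AIResponse {
  output: string;
}

// Sketch of a caller that forwards an AbortSignal and surfaces
// cancellation before the request is dispatched (assumed helper).
async function runChat(
  execute: (req: AIRequest, signal?: AbortSignal) => Promise<AIResponse>,
  req: AIRequest,
  signal?: AbortSignal,
): Promise<AIResponse> {
  if (signal?.aborted) throw new Error("aborted before dispatch");
  return execute(req, signal);
}
```

Checking the signal up front lets callers cancel cheaply before any network round trip is made.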
Executes a streaming chat request using the OpenAI Responses API.
Streams incremental response chunks as they are received from OpenAI.
Each chunk is wrapped as a NormalizedChatMessage for both delta (partial)
and output (accumulated full text). Chunks are emitted in batches to
smooth UI updates and reduce downstream backpressure.
Unified AI chat request
_executionContext: MultiModalExecutionContext - Optional execution context
signal: AbortSignal - Optional abort signal
Async iterable emitting AIResponseChunk objects
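The delta/output duality and the batching behavior described above can be sketched with two small helpers. The `ChatChunk` shape and both function names are assumptions for illustration, not the real chunk type:

```typescript
// Illustrative chunk shape: each emitted chunk carries the partial
// delta plus the full text accumulated so far (assumed, not the real type).
interface ChatChunk {
  delta: string;
  output: string;
}

// Fold raw delta strings into (delta, accumulated output) pairs,
// mirroring the dual delta/output emission described above.
function accumulateDeltas(deltas: string[]): ChatChunk[] {
  let output = "";
  return deltas.map((delta) => {
    output += delta;
    return { delta, output };
  });
}

// Group chunks into fixed-size batches, the kind of coalescing used
// to smooth UI updates and reduce downstream backpressure.
function batchChunks<T>(chunks: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < chunks.length; i += size) {
    batches.push(chunks.slice(i, i + size));
  }
  return batches;
}
```

Emitting the accumulated `output` alongside each `delta` lets consumers re-render the full message without tracking state themselves.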
OpenAIChatCapabilityImpl: Implements OpenAI Responses API chat functionality.
Responsibilities: executing non-streaming and streaming chat requests against the OpenAI Responses API.
This capability is stateless with respect to the session and turn lifecycle; continuation, turn management, and multimodal state are owned by AIClient.
Type parameters:
TChatInput - Client chat request input type
TChatOutput - Chat output type
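A skeletal shape of such a generic capability might look like the following; the class name, constructor argument, and method name are hypothetical stand-ins for the real implementation:

```typescript
// Skeleton of a stateless chat capability parameterized over the
// client-facing input and output types (illustrative names only).
class ChatCapability<TChatInput, TChatOutput> {
  constructor(
    // Assumed injected function that performs the actual request.
    private readonly run: (input: TChatInput) => Promise<TChatOutput>,
  ) {}

  // Non-streaming request: one input produces one awaited output.
  async executeChat(input: TChatInput): Promise<TChatOutput> {
    return this.run(input);
  }
}
```

Parameterizing the capability over input and output types keeps it decoupled from any one client's request shape, consistent with the stateless design noted above.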