# Providers
noumen supports OpenAI, Anthropic, and Google Gemini as AI providers, all behind a unified interface.

noumen is provider-agnostic: every provider implements the `AIProvider` interface, which accepts OpenAI-compatible chat parameters and returns an async iterable of streaming chunks. You can swap providers without changing any application code.
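To make the contract concrete, here is a minimal, self-contained sketch of a provider and a consumer. The `MockProvider` class, the chunk's `text` field, and the parameter shapes are illustrative assumptions, not noumen's actual types; the point is that the calling code depends only on the interface.

```typescript
// Simplified stand-ins for noumen's types (field names are assumptions).
interface ChatStreamChunk {
  text?: string;
}

interface ChatParams {
  model: string;
  messages: { role: string; content: string }[];
}

interface AIProvider {
  chat(params: ChatParams): AsyncIterable<ChatStreamChunk>;
}

// A mock provider that streams fixed chunks, consumed the same
// way a real provider would be.
class MockProvider implements AIProvider {
  async *chat(_params: ChatParams): AsyncIterable<ChatStreamChunk> {
    for (const piece of ["Hello, ", "world"]) {
      yield { text: piece };
    }
  }
}

// Application code sees only AIProvider, so any implementation
// can be substituted without changes here.
async function run(provider: AIProvider): Promise<string> {
  let out = "";
  for await (const chunk of provider.chat({
    model: "mock-model",
    messages: [{ role: "user", content: "Hi" }],
  })) {
    out += chunk.text ?? "";
  }
  return out;
}
```

Swapping in a real provider means constructing a different `AIProvider` implementation and passing it to the same consuming code.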
## Supported providers
- **OpenAI**: GPT-4o, GPT-4.1, and any OpenAI-compatible API
- **Anthropic**: Claude Sonnet, Opus, and Haiku models
- **Google Gemini**: Gemini 2.5 Flash and Pro models
## The AIProvider interface
All providers implement this interface:

```typescript
interface AIProvider {
  chat(params: ChatParams): AsyncIterable<ChatStreamChunk>;
}
```

The `ChatParams` type includes:
| Field | Type | Description |
|---|---|---|
| `model` | `string` | Model identifier |
| `messages` | `ChatMessage[]` | Conversation history |
| `tools` | `ToolDefinition[]` | Available tool definitions |
| `max_tokens` | `number` | Maximum output tokens |
| `system` | `string` | System prompt |
| `temperature` | `number` | Sampling temperature |
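For illustration, a `ChatParams` value following the table above might look like this. The specific model name and message content are made up for the example.

```typescript
// Illustrative ChatParams literal; field names follow the table above.
const params = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize this thread." }],
  max_tokens: 1024,
  system: "You are a concise assistant.",
  temperature: 0.2,
};
```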
Each provider internally converts these to its native SDK format and normalizes the streaming output back to a common `ChatStreamChunk` shape.
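A hedged sketch of what that normalization might look like inside an adapter. The event shapes below are simplified stand-ins for the native SDK stream events, not the exact types, and the single `text` field on the chunk is an assumption.

```typescript
// Common chunk shape (simplified assumption).
type ChatStreamChunk = { text?: string };

// OpenAI-style streaming delta (simplified stand-in for the SDK type).
function fromOpenAIDelta(event: {
  choices: { delta: { content?: string } }[];
}): ChatStreamChunk {
  return { text: event.choices[0]?.delta.content };
}

// Anthropic-style content_block_delta event (simplified stand-in).
function fromAnthropicDelta(event: {
  delta: { text?: string };
}): ChatStreamChunk {
  return { text: event.delta.text };
}
```

Because both adapters emit the same chunk shape, downstream code never needs to know which SDK produced the event.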
## Token usage
All three providers populate a `usage` field on the final streaming chunk:
```typescript
interface ChatCompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

This is automatically captured by the Thread and emitted as `usage` and `turn_complete` stream events. See Stream Events for details.