# LLM Adapters

Configure language model adapters — OpenAI, Anthropic, Gemini, and 10 more.
## OpenAI

```ts
import { OpenAIAdapter } from "avatarlayer";

const llm = new OpenAIAdapter({
  apiKey: "sk-...",
  model: "gpt-5.4-mini", // optional, defaults to gpt-5.4-mini
  baseURL: "https://...", // optional, for proxies or compatible APIs
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | OpenAI API key |
| `model` | `string` | `"gpt-5.4-mini"` | Model identifier |
| `baseURL` | `string` | OpenAI default | Base URL for API-compatible endpoints |
For reasoning models (o1, o3, etc.), set `reasoningEffort` on the session:

```ts
const session = new AvatarSession({
  llm,
  tts,
  renderer,
  reasoningEffort: "medium",
});
```
This maps to OpenAI's `reasoning_effort` parameter.
## Anthropic

```ts
import { AnthropicAdapter } from "avatarlayer";

const llm = new AnthropicAdapter({
  apiKey: "sk-ant-...",
  model: "claude-sonnet-4.6", // optional
  baseURL: "https://...", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Anthropic API key |
| `model` | `string` | `"claude-sonnet-4.6"` | Model identifier |
| `baseURL` | `string` | Anthropic default | Base URL for API-compatible endpoints |
When `reasoningEffort` is set, the Anthropic adapter enables extended thinking with a token budget:

| Effort | Budget tokens |
|---|---|
| `"low"` | 4,096 |
| `"medium"` | 10,000 |
| `"high"` | 32,000 |
Temperature is automatically disabled when thinking is enabled (Anthropic requirement).
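The effort-to-budget mapping above can be sketched as a small helper. This is an illustrative sketch, not the adapter's actual internals — `thinkingParams` and `THINKING_BUDGET` are hypothetical names:

```ts
type ReasoningEffort = "low" | "medium" | "high";

// Budget values from the table above.
const THINKING_BUDGET: Record<ReasoningEffort, number> = {
  low: 4_096,
  medium: 10_000,
  high: 32_000,
};

// Build the Anthropic request fields implied by a given effort level.
function thinkingParams(effort: ReasoningEffort) {
  return {
    thinking: { type: "enabled" as const, budget_tokens: THINKING_BUDGET[effort] },
    // Anthropic rejects an explicit temperature when extended thinking is on,
    // so temperature stays unset.
    temperature: undefined,
  };
}
```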
## Gemini

```ts
import { GeminiAdapter } from "avatarlayer";

const llm = new GeminiAdapter({
  apiKey: "...",
  model: "gemini-3-flash-preview", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Google AI API key |
| `model` | `string` | `"gemini-3-flash-preview"` | Model identifier |
## Azure OpenAI

```ts
import { AzureOpenAIAdapter } from "avatarlayer";

const llm = new AzureOpenAIAdapter({
  apiKey: "...",
  endpoint: "https://my-resource.openai.azure.com",
  deployment: "gpt-4o",
  apiVersion: "2025-03-01-preview", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Azure OpenAI API key |
| `endpoint` | `string` | required | Azure resource endpoint URL |
| `deployment` | `string` | required | Deployment name |
| `apiVersion` | `string` | `"2025-03-01-preview"` | Azure API version |
| `model` | `string` | deployment name | Model identifier (defaults to deployment) |
## Groq

```ts
import { GroqAdapter } from "avatarlayer";

const llm = new GroqAdapter({
  apiKey: "...",
  model: "llama-4-scout-17b-16e-instruct", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Groq API key |
| `model` | `string` | `"llama-4-scout-17b-16e-instruct"` | Model identifier |
## DeepSeek

```ts
import { DeepSeekAdapter } from "avatarlayer";

const llm = new DeepSeekAdapter({
  apiKey: "...",
  model: "deepseek-chat", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | DeepSeek API key |
| `model` | `string` | `"deepseek-chat"` | Model identifier |
## Mistral

```ts
import { MistralAdapter } from "avatarlayer";

const llm = new MistralAdapter({
  apiKey: "...",
  model: "mistral-small-latest", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Mistral API key |
| `model` | `string` | `"mistral-small-latest"` | Model identifier |
## xAI

```ts
import { XAIAdapter } from "avatarlayer";

const llm = new XAIAdapter({
  apiKey: "...",
  model: "grok-3-mini-fast", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | xAI API key |
| `model` | `string` | `"grok-3-mini-fast"` | Model identifier |
## OpenRouter

```ts
import { OpenRouterAdapter } from "avatarlayer";

const llm = new OpenRouterAdapter({
  apiKey: "...",
  model: "openai/gpt-4.1-mini", // optional
  referer: "https://myapp.com", // optional
  title: "My App", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | OpenRouter API key |
| `model` | `string` | `"openai/gpt-4.1-mini"` | Model identifier (`provider/model` format) |
| `referer` | `string` | — | HTTP referer for OpenRouter analytics |
| `title` | `string` | — | App title for OpenRouter analytics |
## Together

```ts
import { TogetherAdapter } from "avatarlayer";

const llm = new TogetherAdapter({
  apiKey: "...",
  model: "meta-llama/Llama-4-Scout-17B-16E-Instruct", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Together API key |
| `model` | `string` | `"meta-llama/Llama-4-Scout-17B-16E-Instruct"` | Model identifier |
## Fireworks

```ts
import { FireworksAdapter } from "avatarlayer";

const llm = new FireworksAdapter({
  apiKey: "...",
  model: "accounts/fireworks/models/llama4-scout-instruct-basic", // optional
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Fireworks API key |
| `model` | `string` | `"accounts/fireworks/models/llama4-scout-instruct-basic"` | Model identifier |
## Ollama

```ts
import { OllamaAdapter } from "avatarlayer";

const llm = new OllamaAdapter({
  baseURL: "http://localhost:11434/v1", // optional
  model: "llama3.2", // optional
});
```
No API key needed — Ollama runs locally.
| Option | Type | Default | Description |
|---|---|---|---|
| `baseURL` | `string` | `"http://localhost:11434/v1"` | Ollama server URL |
| `model` | `string` | `"llama3.2"` | Model identifier |
## Prompt API

```ts
import { PromptAPIAdapter } from "avatarlayer";

const llm = new PromptAPIAdapter();
```
Uses Chrome's built-in LanguageModel API. No API key needed — runs entirely in the browser.
Check for support before constructing the adapter:

```ts
if (await PromptAPIAdapter.supported()) {
  const llm = new PromptAPIAdapter();
}
```
| Static method | Returns | Description |
|---|---|---|
| `PromptAPIAdapter.supported()` | `Promise<boolean>` | Whether the browser supports the Prompt API |
| `PromptAPIAdapter.availability()` | `Promise<string>` | Availability status (`"available"`, `"downloadable"`, etc.) |
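Under the hood, detection presumably probes Chrome's `LanguageModel` global, which is absent in browsers without the Prompt API (and in Node). A standalone sketch of that check — `promptAPIStatus` is a hypothetical helper, not part of the avatarlayer API:

```ts
// Probe Chrome's built-in Prompt API global directly. Where the API is
// missing (older browsers, Node), report "unavailable".
async function promptAPIStatus(): Promise<string> {
  const LM = (globalThis as any).LanguageModel;
  if (!LM) return "unavailable";
  // Chrome reports "available", "downloadable", "downloading", or "unavailable".
  return LM.availability();
}
```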
## The LLMProvider interface

All adapters implement this interface. Implement it to add your own LLM:

```ts
interface LLMProvider {
  readonly id: string;
  chat(messages: ChatMessage[], opts?: LLMOptions): AsyncIterable<LLMChunk>;
}

interface LLMChunk {
  text: string;
  done: boolean;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: MessageContent;
  id?: string;
  timestamp?: number;
}

type MessageContent = string | ContentPart[];
type ContentPart = TextContentPart | ImageContentPart;

interface TextContentPart { type: "text"; text: string }
interface ImageContentPart { type: "image"; image: string }
```
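Any object with an `id` and an async-generator `chat` satisfies the interface. As a minimal illustrative sketch — the `EchoAdapter` name is hypothetical, and it narrows `content` to plain strings for brevity — here is an adapter that streams the user's last message back word by word, mimicking token streaming:

```ts
interface LLMChunk { text: string; done: boolean }
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string; // string-only content for this sketch
}

class EchoAdapter {
  readonly id = "echo";

  async *chat(messages: ChatMessage[]): AsyncIterable<LLMChunk> {
    const last = messages[messages.length - 1];
    // Split on whitespace but keep the separators, so the concatenated
    // chunks reproduce the input exactly.
    for (const piece of last.content.split(/(\s+)/)) {
      if (piece) yield { text: piece, done: false };
    }
    // Signal end of stream with a final done chunk.
    yield { text: "", done: true };
  }
}
```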
See Custom Adapters for a full implementation example.