Providers

LLM Adapters

Configure language model adapters — OpenAI, Anthropic, Gemini, and 10 more.

OpenAI

import { OpenAIAdapter } from "avatarlayer";

const llm = new OpenAIAdapter({
  apiKey: "sk-...",
  model: "gpt-5.4-mini",     // optional, defaults to gpt-5.4-mini
  baseURL: "https://...",     // optional, for proxies or compatible APIs
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | OpenAI API key |
| model | string | "gpt-5.4-mini" | Model identifier |
| baseURL | string | OpenAI default | Base URL for API-compatible endpoints |

Reasoning effort

For reasoning models (o1, o3, etc.), set reasoningEffort on the session:

const session = new AvatarSession({
  llm,
  tts,
  renderer,
  reasoningEffort: "medium",
});

This maps to OpenAI's reasoning_effort parameter.


Anthropic

import { AnthropicAdapter } from "avatarlayer";

const llm = new AnthropicAdapter({
  apiKey: "sk-ant-...",
  model: "claude-sonnet-4.6",  // optional
  baseURL: "https://...",       // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Anthropic API key |
| model | string | "claude-sonnet-4.6" | Model identifier |
| baseURL | string | Anthropic default | Base URL for API-compatible endpoints |

Extended thinking

When reasoningEffort is set, the Anthropic adapter enables extended thinking with a budget:

| Effort | Budget tokens |
| --- | --- |
| "low" | 4,096 |
| "medium" | 10,000 |
| "high" | 32,000 |

Temperature is automatically disabled when thinking is enabled (Anthropic requirement).
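The effort-to-budget mapping above can be sketched as a small lookup table (the `ReasoningEffort` type name and `thinkingBudget` helper are illustrative, not part of the library):

```typescript
type ReasoningEffort = "low" | "medium" | "high";

// Budget tokens allocated for extended thinking at each effort level,
// per the table above.
const THINKING_BUDGETS: Record<ReasoningEffort, number> = {
  low: 4_096,
  medium: 10_000,
  high: 32_000,
};

function thinkingBudget(effort: ReasoningEffort): number {
  return THINKING_BUDGETS[effort];
}
```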


Gemini

import { GeminiAdapter } from "avatarlayer";

const llm = new GeminiAdapter({
  apiKey: "...",
  model: "gemini-3-flash-preview",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Google AI API key |
| model | string | "gemini-3-flash-preview" | Model identifier |

Azure OpenAI

import { AzureOpenAIAdapter } from "avatarlayer";

const llm = new AzureOpenAIAdapter({
  apiKey: "...",
  endpoint: "https://my-resource.openai.azure.com",
  deployment: "gpt-4o",
  apiVersion: "2025-03-01-preview",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Azure OpenAI API key |
| endpoint | string | required | Azure resource endpoint URL |
| deployment | string | required | Deployment name |
| apiVersion | string | "2025-03-01-preview" | Azure API version |
| model | string | deployment name | Model identifier (defaults to deployment) |

Groq

import { GroqAdapter } from "avatarlayer";

const llm = new GroqAdapter({
  apiKey: "...",
  model: "llama-4-scout-17b-16e-instruct",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Groq API key |
| model | string | "llama-4-scout-17b-16e-instruct" | Model identifier |

DeepSeek

import { DeepSeekAdapter } from "avatarlayer";

const llm = new DeepSeekAdapter({
  apiKey: "...",
  model: "deepseek-chat",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | DeepSeek API key |
| model | string | "deepseek-chat" | Model identifier |

Mistral

import { MistralAdapter } from "avatarlayer";

const llm = new MistralAdapter({
  apiKey: "...",
  model: "mistral-small-latest",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Mistral API key |
| model | string | "mistral-small-latest" | Model identifier |

xAI (Grok)

import { XAIAdapter } from "avatarlayer";

const llm = new XAIAdapter({
  apiKey: "...",
  model: "grok-3-mini-fast",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | xAI API key |
| model | string | "grok-3-mini-fast" | Model identifier |

OpenRouter

import { OpenRouterAdapter } from "avatarlayer";

const llm = new OpenRouterAdapter({
  apiKey: "...",
  model: "openai/gpt-4.1-mini",  // optional
  referer: "https://myapp.com",   // optional
  title: "My App",                // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | OpenRouter API key |
| model | string | "openai/gpt-4.1-mini" | Model identifier (provider/model format) |
| referer | string | none | HTTP referer for OpenRouter analytics |
| title | string | none | App title for OpenRouter analytics |

Together

import { TogetherAdapter } from "avatarlayer";

const llm = new TogetherAdapter({
  apiKey: "...",
  model: "meta-llama/Llama-4-Scout-17B-16E-Instruct",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Together API key |
| model | string | "meta-llama/Llama-4-Scout-17B-16E-Instruct" | Model identifier |

Fireworks

import { FireworksAdapter } from "avatarlayer";

const llm = new FireworksAdapter({
  apiKey: "...",
  model: "accounts/fireworks/models/llama4-scout-instruct-basic",  // optional
});

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | required | Fireworks API key |
| model | string | "accounts/fireworks/models/llama4-scout-instruct-basic" | Model identifier |

Ollama

import { OllamaAdapter } from "avatarlayer";

const llm = new OllamaAdapter({
  baseURL: "http://localhost:11434/v1",  // optional
  model: "llama3.2",                     // optional
});

No API key needed — Ollama runs locally.

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| baseURL | string | "http://localhost:11434/v1" | Ollama server URL |
| model | string | "llama3.2" | Model identifier |

Chrome Prompt API

import { PromptAPIAdapter } from "avatarlayer";

const llm = new PromptAPIAdapter();

Uses Chrome's built-in LanguageModel API. No API key needed — runs entirely in the browser.

if (await PromptAPIAdapter.supported()) {
  const llm = new PromptAPIAdapter();
}

| Static method | Returns | Description |
| --- | --- | --- |
| PromptAPIAdapter.supported() | Promise&lt;boolean&gt; | Whether the browser supports the Prompt API |
| PromptAPIAdapter.availability() | Promise&lt;string&gt; | Availability status ("available", "downloadable", etc.) |
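A sketch of branching on the availability status. The "available" and "downloadable" strings come from the table above; the "downloading" case and the return labels are illustrative assumptions:

```typescript
// Decide what to do with the on-device model from an availability status.
// "available" means it is ready now; "downloadable"/"downloading" (assumed)
// mean it can be ready after a model download; anything else needs a
// cloud fallback adapter.
function planForAvailability(status: string): "use" | "download-then-use" | "fallback" {
  switch (status) {
    case "available":
      return "use";
    case "downloadable":
    case "downloading": // assumed intermediate status
      return "download-then-use";
    default:
      return "fallback";
  }
}
```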

LLMProvider interface

All adapters implement this interface. Implement it to add your own LLM:

interface LLMProvider {
  readonly id: string;
  chat(messages: ChatMessage[], opts?: LLMOptions): AsyncIterable<LLMChunk>;
}

interface LLMChunk {
  text: string;
  done: boolean;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: MessageContent;
  id?: string;
  timestamp?: number;
}

type MessageContent = string | ContentPart[];
type ContentPart = TextContentPart | ImageContentPart;

interface TextContentPart { type: "text"; text: string }
interface ImageContentPart { type: "image"; image: string }

See Custom Adapters for a full implementation example.
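As a minimal sketch of the interface in use, here is a toy provider that streams the last user message back one word at a time. The type declarations are local copies of the interfaces above so the snippet is self-contained; in a real project, import them from "avatarlayer". The `EchoAdapter` name is illustrative:

```typescript
// Local copies of the interface shapes above (import from "avatarlayer" in practice).
interface LLMChunk { text: string; done: boolean }
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string | { type: string }[];
}
interface LLMProvider {
  readonly id: string;
  chat(messages: ChatMessage[]): AsyncIterable<LLMChunk>;
}

// Toy provider: echoes the last user message, one word per chunk.
class EchoAdapter implements LLMProvider {
  readonly id = "echo";

  async *chat(messages: ChatMessage[]): AsyncIterable<LLMChunk> {
    const last = messages[messages.length - 1];
    const text = typeof last?.content === "string" ? last.content : "";
    // Split on whitespace but keep the separators so chunks rejoin exactly.
    for (const part of text.split(/(\s+)/)) {
      if (part) yield { text: part, done: false };
    }
    yield { text: "", done: true };
  }
}
```

An instance can then be passed as the `llm` option wherever an adapter is expected, e.g. `new AvatarSession({ llm: new EchoAdapter(), tts, renderer })`.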