Getting Started

Install AvatarLayer and run your first conversational avatar session.

Installation

Install the core SDK:

npm install avatarlayer

Then install the peer dependencies for whichever providers you plan to use:

# LLM providers (pick one or more)
npm install openai                    # OpenAI, Groq, DeepSeek, Mistral, xAI, OpenRouter, Together, Fireworks
npm install @anthropic-ai/sdk         # Anthropic
npm install @google/generative-ai     # Google Gemini

# Local 3D avatar (VRM)
npm install three @pixiv/three-vrm

# Remote video avatars (LemonSlice / Atlas)
npm install livekit-client

Many LLM providers (Groq, DeepSeek, Mistral, xAI, OpenRouter, Together, Fireworks, Ollama) use OpenAI-compatible APIs. The openai package covers all of them. See LLM Adapters for the full list.
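
"OpenAI-compatible" means the provider exposes the same chat-completions request shape at a different base URL, so a single client library can talk to all of them. The helper below is a hypothetical illustration of that idea, not part of the AvatarLayer API; only the OpenAI and Groq base URLs are filled in, so check each provider's docs for the rest:

```typescript
// Why one client covers many providers: only the base URL and API key change,
// while the request/response shape stays the same.
// Hypothetical helper for illustration only; not part of the AvatarLayer API.
const OPENAI_COMPATIBLE_BASE_URLS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  groq: "https://api.groq.com/openai/v1",
  // ...one entry per compatible provider (see each provider's docs)
};

function chatCompletionsUrl(provider: string): string {
  const base = OPENAI_COMPATIBLE_BASE_URLS[provider];
  if (!base) throw new Error(`No base URL registered for provider: ${provider}`);
  return `${base}/chat/completions`;
}
```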

Core usage

The fundamental pattern is: create adapters, create a session, mount it, and send messages.

import {
  AvatarSession,
  OpenAIAdapter,
  ElevenLabsAdapter,
  VRMLocalRenderer,
} from "avatarlayer";

const llm = new OpenAIAdapter({
  apiKey: "sk-...",
  model: "gpt-5.4-mini",
});

const tts = new ElevenLabsAdapter({
  apiKey: "...",
  voiceId: "21m00Tcm4TlvDq8ikWAM",
});

const renderer = new VRMLocalRenderer({
  modelUrl: "/models/avatar.vrm",
});

const session = new AvatarSession({
  llm,
  tts,
  renderer,
  systemPrompt: "You are a helpful avatar assistant.",
});

session.on("state-change", (state) => console.log("State:", state));
session.on("message", (msg) => console.log(`${msg.role}: ${msg.content}`));
session.on("speech-start", () => console.log("Speaking..."));
session.on("speech-end", () => console.log("Done speaking"));
session.on("error", (err) => console.error(err));

await session.start(document.getElementById("avatar-container")!);

await session.sendMessage("Hello! Tell me about yourself.");

Interruption

Cancel the current pipeline at any point:

session.interrupt();

This aborts the LLM stream and any in-flight TTS request, stops the avatar mid-speech, and returns the session to the ready state.
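
Cancellation like this is conventionally built on AbortController, with one signal shared across every async stage so a single abort cancels them all. Here is a minimal self-contained sketch of that pattern (illustrative only; it is not AvatarLayer's actual internals):

```typescript
// Sketch of single-signal pipeline cancellation (illustrative only; not
// AvatarLayer's internals). Every async stage receives the same AbortSignal,
// so aborting the controller cancels them all at once.
class PipelineSketch {
  private controller: AbortController | null = null;
  state: "ready" | "busy" = "ready";

  async run(stage: (signal: AbortSignal) => Promise<void>): Promise<void> {
    this.controller = new AbortController();
    this.state = "busy";
    try {
      await stage(this.controller.signal);
    } catch (err) {
      // Swallow only cancellation errors; rethrow real failures.
      if (!(err instanceof Error && err.name === "AbortError")) throw err;
    } finally {
      this.state = "ready";
    }
  }

  interrupt(): void {
    this.controller?.abort();
  }
}
```

Because fetch-based streaming and most HTTP clients accept an AbortSignal, one interrupt() call can cancel the whole chain.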

Runtime provider swaps

Swap providers without restarting the session:

import { AnthropicAdapter } from "avatarlayer";

session.setLLM(new AnthropicAdapter({
  apiKey: "...",
  model: "claude-sonnet-4.6",
}));

session.setTTS(new ElevenLabsAdapter({
  apiKey: "...",
  voiceId: "different-voice-id",
}));

Adding voice input

Enable realtime speech-to-text so users can talk to the avatar:

import {
  AvatarSession,
  OpenAIAdapter,
  ElevenLabsAdapter,
  VRMLocalRenderer,
  DeepgramSTTAdapter,
  MicCapture,
} from "avatarlayer";

const session = new AvatarSession({
  llm: new OpenAIAdapter({ apiKey: "sk-..." }),
  tts: new ElevenLabsAdapter({ apiKey: "..." }),
  renderer: new VRMLocalRenderer({ modelUrl: "/models/avatar.vrm" }),
  realtimeSTT: new DeepgramSTTAdapter({ apiKey: "..." }),
  voice: { bargeIn: true },
});

await session.start(document.getElementById("avatar-container")!);

const mic = new MicCapture();
await mic.start();
await session.startListening(mic);

See Voice Input for the full voice pipeline guide.

Adding memory

Persist conversations across sessions:

import {
  AvatarSession,
  OpenAIAdapter,
  ElevenLabsAdapter,
  VRMLocalRenderer,
  LocalStorageThreadProvider,
} from "avatarlayer";

const session = new AvatarSession({
  llm: new OpenAIAdapter({ apiKey: "sk-..." }),
  tts: new ElevenLabsAdapter({ apiKey: "..." }),
  renderer: new VRMLocalRenderer({ modelUrl: "/models/avatar.vrm" }),
  memory: {
    provider: new LocalStorageThreadProvider(),
    maxMessages: 50,
  },
});
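
A thread provider ultimately reduces to loading and saving messages by thread id. The interface shape below is an assumption for illustration (check the Memory guide for the real contract); the Map-backed version swaps in for localStorage or a database:

```typescript
// Assumed provider shape, for illustration only; see the Memory guide for
// AvatarLayer's real interface.
interface StoredMessage {
  role: "user" | "assistant";
  content: string;
}

interface ThreadProviderSketch {
  load(threadId: string): Promise<StoredMessage[]>;
  save(threadId: string, messages: StoredMessage[]): Promise<void>;
}

// Map-backed store; replace the Map with localStorage or a database call.
class InMemoryThreadProvider implements ThreadProviderSketch {
  private store = new Map<string, StoredMessage[]>();

  constructor(private maxMessages = 50) {}

  async load(threadId: string): Promise<StoredMessage[]> {
    return this.store.get(threadId) ?? [];
  }

  async save(threadId: string, messages: StoredMessage[]): Promise<void> {
    // Keep only the most recent messages, mirroring the maxMessages option.
    this.store.set(threadId, messages.slice(-this.maxMessages));
  }
}
```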

See Memory for thread providers, semantic recall, and thread management.

Cleanup

When you're done, destroy the session to unmount the renderer and clean up all resources:

session.destroy();

React quick start

If you're building a React app, use the built-in bindings instead of wiring up the session manually:

import { AvatarProvider, useAvatarSession, AvatarView } from "avatarlayer/react";

function App() {
  const config = {
    llm: new OpenAIAdapter({ apiKey: "..." }),
    tts: new ElevenLabsAdapter({ apiKey: "..." }),
    renderer: new VRMLocalRenderer({ modelUrl: "/models/avatar.vrm" }),
  };

  return (
    <AvatarProvider config={config}>
      <AvatarView style={{ width: 640, height: 480 }} />
      <Chat />
    </AvatarProvider>
  );
}

function Chat() {
  const { messages, state, sendMessage, interrupt } = useAvatarSession();
  // Build your chat UI using messages, state, sendMessage, interrupt
}

See React Integration for full details.