Renderers

Atlas

Realtime lip-synced video avatars by North Model Labs.

AtlasRenderer connects to North Model Labs Atlas for realtime lip-synced video avatars. Unlike LemonSlice, Atlas manages its own LiveKit room — you only need an Atlas API key (no separate LiveKit credentials).

Installation

npm install livekit-client

Usage

import { AtlasRenderer } from "avatarlayer";

const renderer = new AtlasRenderer({
  createSession: async () => {
    const resp = await fetch("/api/atlas", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        atlasApiKey: "...",
        faceUrl: "https://...",  // optional face image
      }),
    });
    return resp.json();
  },
  deleteSession: async (sessionId) => {
    await fetch(`/api/atlas/${sessionId}`, {
      method: "DELETE",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ atlasApiKey: "..." }),
    });
  },
  onVideoStream: (stream) => console.log("Video:", stream),
  onStateChange: (state) => console.log("State:", state),
});

Constructor options

| Option | Type | Description |
| --- | --- | --- |
| `createSession` | `() => Promise<AtlasSession>` | Required. Returns LiveKit connection info from Atlas. |
| `deleteSession` | `(sessionId: string) => Promise<void>` | Recommended. Tears down the session to stop billing. |
| `onVideoStream` | `(stream: MediaStream \| null) => void` | Called when the video track is received or lost. |
| `onAudioStream` | `(stream: MediaStream \| null) => void` | Called when the audio track is received or lost. |
| `onStateChange` | `(state: string) => void` | Called on connection state changes. |
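The stream callbacks hand you raw `MediaStream` objects, or `null` when the corresponding track is lost. One way to wire them to media elements, sketched here with a hypothetical helper (`attachStream` is illustrative, not part of the library):

```typescript
// Hypothetical helper: assigns a stream to a media element's srcObject,
// and clears the element when the callback fires with null.
function attachStream(
  el: { srcObject: MediaStream | null },
  stream: MediaStream | null,
): void {
  el.srcObject = stream;
}
```

In the browser you would pass real elements, e.g. `onVideoStream: (s) => attachStream(videoEl, s)`.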

AtlasSession shape

interface AtlasSession {
  sessionId: string | null;
  livekitUrl: string;
  token: string;
  room: string;
  mode: string | null;
}
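For local development it can help to stub `createSession` so the renderer can be exercised without contacting Atlas. A minimal sketch, where every value is a placeholder rather than a real credential:

```typescript
// Matches the AtlasSession shape above; all values are placeholders.
interface AtlasSession {
  sessionId: string | null;
  livekitUrl: string;
  token: string;
  room: string;
  mode: string | null;
}

// A mock createSession for tests and local wiring; it never calls Atlas.
async function mockCreateSession(): Promise<AtlasSession> {
  return {
    sessionId: "mock-session-id",
    livekitUrl: "wss://example.livekit.cloud", // placeholder URL
    token: "mock-jwt",                         // placeholder LiveKit token
    room: "mock-room",
    mode: "passthrough",                       // assumed mode string
  };
}
```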

How it works (passthrough mode)

Atlas uses passthrough mode — your TTS audio drives the avatar:

  1. mount() calls createSession to get LiveKit credentials from Atlas
  2. Connects to the LiveKit room
  3. Creates a WebAudio pipeline: AudioContext → MediaStreamAudioDestinationNode
  4. Publishes the destination stream as a LiveKit audio track
  5. When speak(audio) is called, the audio blob is decoded and played through the WebAudio pipeline
  6. Atlas receives the audio track and renders lip-synced video in realtime
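Steps 3–5 above can be sketched as follows. This is an illustrative sketch, not the renderer's actual internals; the `room` shape mirrors livekit-client's `localParticipant.publishTrack`, and the WebAudio APIs are browser-only:

```typescript
// Illustrative sketch of the passthrough pipeline (steps 3-5).
async function setupPassthrough(room: {
  localParticipant: { publishTrack: (t: MediaStreamTrack) => Promise<unknown> };
}) {
  // Step 3: WebAudio pipeline ending in a MediaStreamAudioDestinationNode.
  const audioCtx = new AudioContext();
  const destination = audioCtx.createMediaStreamDestination();

  // Step 4: publish the destination stream's audio track to the LiveKit room.
  const [track] = destination.stream.getAudioTracks();
  await room.localParticipant.publishTrack(track);

  // Step 5: decode a TTS audio blob and play it into the pipeline,
  // where Atlas picks it up and lip-syncs the video.
  return async (audio: Blob) => {
    const buffer = await audioCtx.decodeAudioData(await audio.arrayBuffer());
    const source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(destination);
    source.start();
  };
}
```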

Session lifecycle

Always delete sessions

Atlas sessions are billed while active. Always call deleteSession when done, or let unmount() handle it automatically.

When unmount() is called, the renderer:

  1. Stops any current audio playback
  2. Disconnects from the LiveKit room
  3. Closes the AudioContext
  4. Calls deleteSession if a session ID exists
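The teardown order above can be expressed as a small sketch. The `TeardownDeps` interface is illustrative (not the renderer's real API); it exists only to make the sequencing explicit:

```typescript
// Illustrative dependencies for the unmount sequence; not the renderer's API.
interface TeardownDeps {
  stopPlayback: () => void;
  disconnectRoom: () => Promise<void>;
  closeAudioContext: () => Promise<void>;
  deleteSession: (sessionId: string) => Promise<void>;
  sessionId: string | null;
}

async function teardown(deps: TeardownDeps): Promise<void> {
  deps.stopPlayback();            // 1. stop any current audio playback
  await deps.disconnectRoom();    // 2. disconnect from the LiveKit room
  await deps.closeAudioContext(); // 3. close the AudioContext
  if (deps.sessionId) {
    await deps.deleteSession(deps.sessionId); // 4. stop billing
  }
}
```

Deleting the session last means the avatar stays connected until all local resources are released, and the billing-relevant call is skipped cleanly when no session was ever created.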