Renderers

Overview of avatar renderer types — local 3D, local 2D, and remote video.

AvatarLayer supports two categories of avatar renderers:

  • Local — Models rendered in the browser (VRM via Three.js, Live2D)
  • Remote — Video streams from cloud avatar services (LemonSlice, Atlas, HeyGen)

All renderers implement the same AvatarRenderer interface, so you can swap between a local VRM model and a remote video avatar without changing any other code.

AvatarRenderer interface

interface AvatarRenderer {
  readonly id: string;
  readonly type: "local" | "remote";
  mount(container: HTMLElement): Promise<void>;
  update(control: Partial<AvatarControl>): void;
  speak(audio: Blob): Promise<void>;
  speakText?(text: string, signal?: AbortSignal): Promise<void>;
  interrupt(): void;
  unmount(): void;
}
| Method | Description |
| --- | --- |
| `mount(container)` | Attach the renderer to a DOM element. For local renderers this creates a canvas; for remote renderers this establishes a LiveKit connection and attaches video. |
| `update(control)` | Update avatar state (face, emotion, body, scene). Local renderers apply these directly; remote renderers typically ignore them. |
| `speak(audio)` | Play an audio blob through the avatar with lip-sync. This is the standard path for TTS audio. |
| `speakText(text)` | Optional. When present, AvatarSession skips external TTS and calls this instead (used by HeyGen, which handles TTS internally). |
| `interrupt()` | Stop any current speech and clear buffers. |
| `unmount()` | Tear down the renderer and release all resources. |
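To make the contract concrete, here is a minimal sketch of a renderer implementing the interface. The `NullRenderer` class and the fields inside `AvatarControl` are illustrative assumptions for this example, not part of AvatarLayer's API; a real renderer would draw to the container instead of just recording state.

```typescript
// Assumed minimal shape of AvatarControl -- only the fields this demo needs.
interface AvatarControl {
  emotion?: string;
  face?: Record<string, number>;
}

// The AvatarRenderer contract, as defined above.
interface AvatarRenderer {
  readonly id: string;
  readonly type: "local" | "remote";
  mount(container: HTMLElement): Promise<void>;
  update(control: Partial<AvatarControl>): void;
  speak(audio: Blob): Promise<void>;
  speakText?(text: string, signal?: AbortSignal): Promise<void>;
  interrupt(): void;
  unmount(): void;
}

// Hypothetical no-op renderer: records state transitions instead of
// rendering. Useful in tests or as a skeleton for a custom renderer.
class NullRenderer implements AvatarRenderer {
  readonly id = "null";
  readonly type = "local" as const;
  mounted = false;
  speaking = false;
  lastControl: Partial<AvatarControl> = {};

  async mount(_container: HTMLElement): Promise<void> {
    this.mounted = true; // a real renderer would create a canvas here
  }
  update(control: Partial<AvatarControl>): void {
    this.lastControl = { ...this.lastControl, ...control };
  }
  async speak(_audio: Blob): Promise<void> {
    this.speaking = true; // a real renderer would play audio with lip-sync
  }
  interrupt(): void {
    this.speaking = false; // stop speech, clear buffers
  }
  unmount(): void {
    this.mounted = false; // release resources
  }
}
```

Because every renderer satisfies the same interface, code that drives a `NullRenderer` in tests can drive a production renderer unchanged.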

Built-in renderers

| Renderer | Type | Use case |
| --- | --- | --- |
| VRM | local | 3D avatar in the browser via Three.js + @pixiv/three-vrm |
| Live2D | local | 2D animated avatar using Live2D Cubism |
| LemonSlice | remote | Remote video avatar via LiveKit |
| Atlas | remote | Remote lip-synced video avatar by North Model Labs |
| HeyGen | remote | Remote streaming avatar with built-in TTS |
| VideoRenderer | local | Passthrough renderer for a MediaStream video source |
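The optional `speakText` method shown earlier is how a session can prefer a renderer's built-in TTS (HeyGen-style) and fall back to external synthesis otherwise. The sketch below illustrates that feature-detection pattern; `deliver` and `synthesize` are hypothetical names, not AvatarLayer exports, and `synthesize` is a stub standing in for a real TTS call.

```typescript
// Only the two speech methods matter for dispatch.
interface SpeechRenderer {
  speak(audio: Blob): Promise<void>;
  speakText?(text: string, signal?: AbortSignal): Promise<void>;
}

// Stub for an external TTS service -- a real implementation would
// return synthesized speech audio, not the raw text bytes.
async function synthesize(text: string): Promise<Blob> {
  return new Blob([text], { type: "audio/wav" });
}

// Prefer the renderer's internal TTS when it exists; otherwise
// synthesize audio externally and play it through speak().
async function deliver(renderer: SpeechRenderer, text: string): Promise<string> {
  if (renderer.speakText) {
    await renderer.speakText(text);
    return "internal-tts";
  }
  const audio = await synthesize(text);
  await renderer.speak(audio);
  return "external-tts";
}
```

The same call site then works for every renderer in the table above, whether or not it handles TTS itself.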

Choosing a renderer