Renderers
Live2D
Render 2D animated avatars using Live2D Cubism models.
The Live2DRenderer renders a Live2D Cubism model in the browser. It supports automatic blink, pointer tracking, expression presets, and both RMS-based and viseme-based lip-sync.
Usage
```ts
import { Live2DRenderer } from "avatarlayer";

const renderer = new Live2DRenderer({
  modelUrl: "/models/avatar/model.json",
  frameMode: "upperBodyFocus", // optional
  autoBlink: true,             // optional
  autoTrackPointer: false,     // optional
  visemeLipSync: false,        // optional
});
```

Constructor options
| Option | Type | Default | Description |
|---|---|---|---|
| `modelUrl` | `string` | required | URL to the Live2D model JSON file |
| `vendorBase` | `string` | — | Base URL for the Live2D Cubism SDK vendor files |
| `frameMode` | `Live2DFrameMode` | `"upperBodyFocus"` | Camera framing mode |
| `autoBlink` | `boolean` | `true` | Enable automatic blinking |
| `autoTrackPointer` | `boolean` | `false` | Track the mouse/touch pointer with eye gaze |
| `visemeLipSync` | `boolean` | — | Enable viseme-based lip-sync instead of RMS amplitude |
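When `autoTrackPointer` is enabled, pointer coordinates have to be normalized into the model's gaze range. The exact mapping is internal to the renderer; the sketch below shows one plausible normalization, and every name in it (`pointerToGaze`, the rect shape) is an illustrative assumption, not part of the avatarlayer API:

```ts
// Map a pointer position inside a canvas rect to gaze values in [-1, 1],
// where (0, 0) is the canvas center. Illustrative helper, not library code.
function pointerToGaze(
  x: number,
  y: number,
  rect: { left: number; top: number; width: number; height: number },
): { gazeX: number; gazeY: number } {
  const nx = ((x - rect.left) / rect.width) * 2 - 1;  // -1 (left) .. 1 (right)
  const ny = ((y - rect.top) / rect.height) * 2 - 1;  // -1 (top) .. 1 (bottom)
  // Clamp so pointers outside the canvas do not over-rotate the eyes.
  const clamp = (v: number) => Math.max(-1, Math.min(1, v));
  return { gazeX: clamp(nx), gazeY: clamp(-ny) }; // Live2D gaze Y points up
}
```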
Frame modes
| Mode | Description |
|---|---|
| `"upperBodyFocus"` | Camera frames the upper body and face |
How it works
When mounted, the renderer:
- Loads the Live2D Cubism SDK from the vendor base URL
- Loads the model from the provided URL
- Starts an automatic blink loop (if `autoBlink` is enabled)
- Optionally tracks pointer position for eye gaze
- Begins a render loop at the screen refresh rate
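The blink loop periodically drives the model's eye-open parameter from 1 down to 0 and back. A minimal sketch of such an envelope follows; the function name, timing, and linear shape are assumptions for illustration, not avatarlayer internals:

```ts
// Eye-openness envelope for a single blink: 1 = fully open, 0 = closed.
// t is milliseconds since the blink started; the lids close during the
// first half of `duration` and reopen during the second half.
function blinkOpenness(t: number, duration = 150): number {
  if (t <= 0 || t >= duration) return 1; // not mid-blink
  const half = duration / 2;
  return t < half
    ? 1 - t / half   // closing phase
    : (t - half) / half; // opening phase
}
```

Each frame, the render loop would sample this envelope and write the result to the model's eye-open parameter before drawing.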
When `speak(audio)` is called:
- The audio blob is played through an `<audio>` element
- The audio is analyzed in real time for lip-sync, either via RMS amplitude or viseme weights
- The mouth parameters on the Live2D model are updated each frame
- The returned promise resolves when the audio ends
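RMS-based lip-sync reduces each window of audio samples to a single amplitude that drives the mouth-open parameter. A sketch of that computation is below; the function name and `gain` factor are illustrative assumptions, and the library's actual analyser configuration is not documented here:

```ts
// Root-mean-square amplitude of one window of PCM samples in [-1, 1],
// scaled into a mouth-open value in [0, 1]. `gain` boosts quiet sources.
function rmsMouthOpen(samples: Float32Array, gain = 4): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  const rms = Math.sqrt(sum / samples.length);
  return Math.min(1, rms * gain);
}
```

In a browser, the sample window would typically come from a Web Audio `AnalyserNode` via `getFloatTimeDomainData()`.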
Avatar control
The Live2D renderer responds to `update()` calls for expression and emotion control:
```ts
session.updateControl({
  avatar: {
    emotion: {
      label: "happy",
      intensity: 0.8,
      valence: 0.7,
      arousal: 0.5,
    },
  },
});
```

Supported expressions are mapped via `EMOTION_LIVE2D_MAP`: `happy`, `sad`, `angry`, `surprised`, `relaxed`, `neutral`.
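The contents of `EMOTION_LIVE2D_MAP` are not shown in this document; the sketch below only illustrates the general shape such a mapping could take, with invented parameter values, and one way `intensity` could scale them:

```ts
// Illustrative emotion-to-parameter table. The real EMOTION_LIVE2D_MAP and
// its parameter IDs/values are internal to avatarlayer; these are made up.
const EMOTION_PARAMS: Record<string, Record<string, number>> = {
  happy:   { ParamMouthForm: 1.0, ParamBrowLY: 0.3 },
  sad:     { ParamMouthForm: -1.0, ParamBrowLY: -0.5 },
  neutral: {},
};

// Scale a preset by intensity in [0, 1], falling back to neutral
// for unknown labels.
function emotionParams(label: string, intensity: number): Record<string, number> {
  const preset = EMOTION_PARAMS[label] ?? EMOTION_PARAMS.neutral;
  const out: Record<string, number> = {};
  for (const [id, value] of Object.entries(preset)) out[id] = value * intensity;
  return out;
}
```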