Renderers

Live2D

Render 2D animated avatars using Live2D Cubism models.

The Live2DRenderer renders a Live2D Cubism model in the browser. It supports automatic blink, pointer tracking, expression presets, and both RMS-based and viseme-based lip-sync.

Usage

import { Live2DRenderer } from "avatarlayer";

const renderer = new Live2DRenderer({
  modelUrl: "/models/avatar/model.json",
  frameMode: "upperBodyFocus",  // optional
  autoBlink: true,               // optional
  autoTrackPointer: false,       // optional
  visemeLipSync: false,          // optional
});

Constructor options

Option             Type              Default           Description
modelUrl           string            (required)        URL to the Live2D model JSON file
vendorBase         string            —                 Base URL for the Live2D Cubism SDK vendor files
frameMode          Live2DFrameMode   "upperBodyFocus"  Camera framing mode
autoBlink          boolean           true              Enable automatic blinking
autoTrackPointer   boolean           false             Track mouse/touch pointer with eye gaze
visemeLipSync      boolean           false             Enable viseme-based lip-sync instead of RMS amplitude
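The defaults in the table above can be expressed as a small normalization helper. This is an illustrative sketch, not avatarlayer's actual internals: the option names and defaults come from the table, while `resolveOptions` and the `ResolvedOptions` type are hypothetical.

```typescript
// Sketch of applying the documented defaults to the constructor options.
// `resolveOptions` is a hypothetical helper, not part of avatarlayer's API.
type Live2DFrameMode = "upperBodyFocus";

interface Live2DRendererOptions {
  modelUrl: string;            // required
  vendorBase?: string;         // base URL for Cubism SDK vendor files
  frameMode?: Live2DFrameMode;
  autoBlink?: boolean;
  autoTrackPointer?: boolean;
  visemeLipSync?: boolean;
}

interface ResolvedOptions {
  modelUrl: string;
  vendorBase?: string;
  frameMode: Live2DFrameMode;
  autoBlink: boolean;
  autoTrackPointer: boolean;
  visemeLipSync: boolean;
}

function resolveOptions(opts: Live2DRendererOptions): ResolvedOptions {
  if (!opts.modelUrl) throw new Error("modelUrl is required");
  return {
    modelUrl: opts.modelUrl,
    vendorBase: opts.vendorBase,
    frameMode: opts.frameMode ?? "upperBodyFocus",
    autoBlink: opts.autoBlink ?? true,
    autoTrackPointer: opts.autoTrackPointer ?? false,
    visemeLipSync: opts.visemeLipSync ?? false,
  };
}
```

With these defaults, the constructor in the Usage section could omit every option except modelUrl.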

Frame modes

Mode               Description
"upperBodyFocus"   Camera frames the upper body and face

How it works

When mounted, the renderer:

  1. Loads the Live2D Cubism SDK from the vendor base URL
  2. Loads the model from the provided URL
  3. Starts an automatic blink loop (if autoBlink is enabled)
  4. Optionally tracks pointer position for eye gaze
  5. Begins a render loop at the screen refresh rate
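Step 3, the automatic blink loop, can be sketched as a pure function of elapsed time: the eye-open parameter stays at 1 between blinks and dips to 0 at each blink's midpoint. The timing constants and the triangular ramp here are illustrative assumptions, not the renderer's actual values.

```typescript
// Sketch of an automatic blink curve: returns the eye-open parameter (0..1)
// for a given elapsed time in seconds. Timing constants are assumptions.
const BLINK_INTERVAL = 4.0; // seconds between blinks (assumed)
const BLINK_DURATION = 0.2; // seconds a blink takes (assumed)

function eyeOpen(timeSec: number): number {
  const phase = timeSec % BLINK_INTERVAL;
  if (phase >= BLINK_DURATION) return 1; // eyes fully open between blinks
  // Triangular ramp across the blink: 1 -> 0 at the midpoint -> 1.
  const t = phase / BLINK_DURATION; // 0..1 across the blink
  return Math.abs(2 * t - 1);
}
```

On each frame of the render loop, a value like this would be written to the model's eye-open parameter.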

When speak(audio) is called:

  1. The audio blob is played through an <audio> element
  2. Audio is analyzed in real time for lip-sync, either via RMS amplitude or viseme weights
  3. The mouth parameters on the Live2D model are updated each frame
  4. The promise resolves when the audio ends
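The RMS path of step 2 reduces each frame of time-domain audio samples (e.g. from a Web Audio AnalyserNode) to a single amplitude that drives the mouth-open parameter. A minimal sketch, assuming samples in the -1..1 range; the gain factor and clamp are illustrative, not avatarlayer's actual values.

```typescript
// Root-mean-square amplitude of one frame of time-domain samples (-1..1).
function rms(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Map an RMS amplitude to a 0..1 mouth-open parameter.
// The gain factor is an assumption, not the renderer's actual value.
function mouthOpenFromRms(amplitude: number, gain = 4): number {
  return Math.min(1, amplitude * gain);
}
```

Each animation frame, the renderer would compute this from the latest analyser frame and write the result to the model's mouth parameter.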

Avatar control

The Live2D renderer responds to updateControl() calls for expression and emotion control:

session.updateControl({
  avatar: {
    emotion: {
      label: "happy",
      intensity: 0.8,
      valence: 0.7,
      arousal: 0.5,
    },
  },
});

Emotion labels are mapped to expressions via EMOTION_LIVE2D_MAP; the supported labels are happy, sad, angry, surprised, relaxed, and neutral.
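The label-to-expression mapping can be sketched as a lookup with a neutral fallback. Only the label set comes from the documentation above; the expression names on the right and the fallback behavior are assumptions, not the actual contents of EMOTION_LIVE2D_MAP.

```typescript
// Sketch of an EMOTION_LIVE2D_MAP-style lookup. The expression names are
// hypothetical; only the six labels come from the documentation.
const EMOTION_TO_EXPRESSION: Record<string, string> = {
  happy: "exp_happy",
  sad: "exp_sad",
  angry: "exp_angry",
  surprised: "exp_surprised",
  relaxed: "exp_relaxed",
  neutral: "exp_neutral",
};

// Unknown labels fall back to neutral (assumed behavior).
function expressionFor(label: string): string {
  return EMOTION_TO_EXPRESSION[label] ?? EMOTION_TO_EXPRESSION.neutral;
}
```

A control update like the one above would then resolve `emotion.label` to an expression, with intensity scaling how strongly it is applied.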