# React Integration

Use AvatarLayer in React apps with `AvatarProvider`, `useAvatarSession`, `AvatarView`, and `useMic`.

AvatarLayer ships with React bindings via the `avatarlayer/react` subpath export. These give you a context-based API that manages the session lifecycle automatically.
## Components and hooks

| Export | Type | Description |
|---|---|---|
| `AvatarProvider` | Component | Creates and manages an `AvatarSession` in context |
| `useAvatarSession` | Hook | Access session state, messages, and actions |
| `AvatarView` | Component | Renders the avatar by mounting the renderer into a `div` |
| `useMic` | Hook | Manage browser microphone capture |
## Basic setup

```tsx
import {
  AvatarProvider,
  useAvatarSession,
  AvatarView,
} from "avatarlayer/react";
import {
  OpenAIAdapter,
  ElevenLabsAdapter,
  VRMLocalRenderer,
} from "avatarlayer";

function App() {
  const config = {
    llm: new OpenAIAdapter({ apiKey: "..." }),
    tts: new ElevenLabsAdapter({ apiKey: "..." }),
    renderer: new VRMLocalRenderer({ modelUrl: "/models/avatar.vrm" }),
    systemPrompt: "You are a helpful assistant.",
  };

  return (
    <AvatarProvider config={config}>
      <div style={{ display: "flex", height: "100vh" }}>
        <AvatarView style={{ flex: 1 }} />
        <Chat />
      </div>
    </AvatarProvider>
  );
}
```

## AvatarProvider
Wraps your app (or a subtree) and creates an `AvatarSession` internally.

```tsx
<AvatarProvider config={sessionConfig}>
  {children}
</AvatarProvider>
```

Props:

| Prop | Type | Description |
|---|---|---|
| `config` | `AvatarSessionConfig` | Session configuration (`llm`, `tts`, `renderer`, `systemPrompt`, etc.) |

The session is created once when the provider mounts. To swap providers at runtime, use `setLLM` and `setTTS` from the hook — don't recreate the provider.
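Because the session is created from the config the provider sees on mount, it is worth constructing the config (and its adapters) once rather than inline on every render. One way to sketch that, without React specifics, is a lazy once-only factory — `once` and `getConfig` are our illustrative names, not part of avatarlayer; in a real app you would build the adapters from the Basic setup example inside the factory:

```typescript
// Lazily build a value exactly once and return the same reference afterwards.
// Useful when adapter construction should happen on first use, not at import.
function once<T>(factory: () => T): () => T {
  let value: T | undefined;
  let made = false;
  return () => {
    if (!made) {
      value = factory();
      made = true;
    }
    return value as T;
  };
}

// Stand-in config; a real app would also construct llm/tts/renderer adapters
// here, as shown in Basic setup.
const getConfig = once(() => ({
  systemPrompt: "You are a helpful assistant.",
}));
```

Passing `config={getConfig()}` then hands `AvatarProvider` the same object on every render. Hoisting a plain `const config = { ... }` to module scope achieves the same thing when lazy construction isn't needed.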
## useAvatarSession

Returns the session context. Must be used inside an `<AvatarProvider>`.
```tsx
function Chat() {
  const {
    state,                 // SessionState
    messages,              // ChatMessage[]
    sendMessage,           // (text: string, opts?: SendMessageOptions) => void
    interrupt,             // () => void
    setLLM,                // (llm: LLMProvider) => void
    setTTS,                // (tts: TTSProvider) => void
    session,               // AvatarSession | null
    mount,                 // (container: HTMLElement) => void
    listening,             // boolean
    startListening,        // (source: AsyncIterable<Float32Array>) => Promise<void>
    stopListening,         // (opts?: { drain?: boolean }) => void
    videoActive,           // boolean
    startVideo,            // (stream: MediaStream) => void
    stopVideo,             // () => void
    visionWorkloadsActive, // boolean
    visionContext,         // VisionContextEntry[]
    startVisionWorkloads,  // () => void
    stopVisionWorkloads,   // () => void
  } = useAvatarSession();

  return (
    <div>
      <div>State: {state}</div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <strong>{msg.role}:</strong> {typeof msg.content === "string" ? msg.content : "..."}
        </div>
      ))}
      <button onClick={() => sendMessage("Hello!")}>Send</button>
      <button onClick={interrupt}>Interrupt</button>
    </div>
  );
}
```

For thread management (`switchThread`, `newThread`) or other advanced APIs not exposed on the context, access the raw session via `session`.
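Note that `startListening` takes any `AsyncIterable<Float32Array>`, so the microphone (via `useMic`) is not the only possible audio source. A minimal sketch of building such a stream from prerecorded PCM chunks — the helper name `pcmChunks` is ours, not part of avatarlayer:

```typescript
// Turn an array of PCM chunks into the AsyncIterable<Float32Array> shape
// that startListening expects. The async generator yields the chunks in
// order and completes when the array is exhausted.
async function* pcmChunks(
  buffers: Float32Array[],
): AsyncIterable<Float32Array> {
  for (const buf of buffers) {
    yield buf;
  }
}

// Example: a stream of two 4-sample chunks of silence.
const source = pcmChunks([new Float32Array(4), new Float32Array(4)]);
// In a component: await startListening(source);
```

The same shape lets you adapt WebSocket audio, file playback, or test fixtures to the session without going through the browser microphone.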
## AvatarView

A `div` that automatically mounts the renderer when placed in the tree.

```tsx
<AvatarView
  style={{ width: 640, height: 480 }}
  className="rounded-lg overflow-hidden"
/>
```

`AvatarView` accepts all standard `div` HTML attributes (`style`, `className`, etc.). It:

- Creates a ref to an internal `<div>`
- Calls `mount(container)` from context when the component mounts
- The renderer (VRM canvas, video element, etc.) is attached inside this `div`
## useMic

Manages browser microphone capture. Returns start/stop functions that create a `MicCapture` instance and wire it to `startListening`/`stopListening`.

```tsx
import { useMic } from "avatarlayer/react";

function VoiceButton() {
  const { listening } = useAvatarSession();
  const { startMic, stopMic } = useMic();

  return (
    <button onClick={listening ? stopMic : startMic}>
      {listening ? "Stop" : "Talk"}
    </button>
  );
}
```

`useMic` accepts an optional options object:

```tsx
useMic({
  captureOptions: {
    sampleRate: 16000, // default 16000
    bufferSize: 4096,  // default 4096
  },
});
```

## Runtime provider swaps
```tsx
import { AnthropicAdapter, ElevenLabsAdapter } from "avatarlayer";

function Settings() {
  const { setLLM, setTTS } = useAvatarSession();

  const switchToAnthropic = () => {
    setLLM(new AnthropicAdapter({
      apiKey: "...",
      model: "claude-sonnet-4.6",
    }));
  };

  const switchVoice = () => {
    setTTS(new ElevenLabsAdapter({
      apiKey: "...",
      voiceId: "new-voice-id",
    }));
  };

  return (
    <>
      <button onClick={switchToAnthropic}>Use Anthropic</button>
      <button onClick={switchVoice}>Change Voice</button>
    </>
  );
}
```