# Examples

## WebLLM (browser-only)
Browser-only chat — the LLM runs 100% in the browser via WebGPU + @mlc-ai/web-llm. No API key, no server-side inference, no telemetry.
Demonstrates the webllm adapter from @agentskit/adapters driving the standard useChat hook from @agentskit/react.
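To make the adapter-plus-hook shape concrete, here is a minimal self-contained sketch of the pattern. The `ChatAdapter` interface, `mockWebLLMAdapter`, and `complete` are illustrative stand-ins, not the actual exports of `@agentskit/adapters` or `@agentskit/react`; in the real example the tokens come from a WebGPU-hosted model rather than a fixed array.

```typescript
// An adapter turns a prompt into an async stream of tokens.
// (Hypothetical interface, sketched for illustration.)
interface ChatAdapter {
  stream(prompt: string): AsyncIterable<string>;
}

// Stand-in for the webllm adapter: yields canned tokens instead of
// running inference in the browser.
function mockWebLLMAdapter(): ChatAdapter {
  return {
    async *stream(prompt: string) {
      for (const token of ["Echo:", " ", prompt]) {
        yield token;
      }
    },
  };
}

// A useChat-style consumer: collect the streamed tokens into a reply.
async function complete(adapter: ChatAdapter, prompt: string): Promise<string> {
  let reply = "";
  for await (const token of adapter.stream(prompt)) {
    reply += token;
  }
  return reply;
}

complete(mockWebLLMAdapter(), "hello").then((r) => console.log(r));
// logs "Echo: hello"
```

The key design point is that the hook only depends on the streaming interface, so the same UI code works whether tokens come from a server or from an in-browser model.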
Run the example with `pnpm --filter @agentskit/example-webllm dev`. Persistent memory survives page reload via localStorage.
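The persistence mechanic can be sketched as follows: messages are serialized to localStorage on change and restored on load. The storage key and `Message` shape are assumptions for illustration, not AgentsKit's actual internals; a Map-backed stand-in replaces `window.localStorage` so the sketch also runs outside a browser.

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Hypothetical storage key, not the one the example actually uses.
const STORAGE_KEY = "agentskit-webllm-chat";

// In the browser this would be window.localStorage; a Map-backed
// stand-in keeps the sketch runnable anywhere.
const storage = {
  data: new Map<string, string>(),
  getItem(k: string): string | null {
    return this.data.get(k) ?? null;
  },
  setItem(k: string, v: string): void {
    this.data.set(k, v);
  },
};

// Serialize the conversation so it survives a page reload.
function saveMessages(messages: Message[]): void {
  storage.setItem(STORAGE_KEY, JSON.stringify(messages));
}

// Restore the conversation, falling back to an empty history.
function loadMessages(): Message[] {
  const raw = storage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Message[]) : [];
}

saveMessages([{ role: "user", content: "hi" }]);
console.log(loadMessages()[0].content); // "hi"
```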
## Related
- **Examples**: Interactive demos. For copy-paste code, see Recipes.
- **Basic Chat**: The simplest use case — streaming AI conversation with auto-scroll, stop button, and keyboard handling. All in 10 lines with AgentsKit.
- **Tool Use**: AI assistants that call functions — weather, search, DB queries. Tool calls render as expandable cards.