# WebLlmConfig
Auto-generated API reference for WebLlmConfig.
## Interface: WebLlmConfig
Defined in: webllm.ts:14
Browser-only adapter backed by WebLLM (https://github.com/mlc-ai/web-llm). Models run on-device via WebGPU, so no network requests are made for inference. The MLCEngine is loaded lazily on the first stream so apps can ship the import without paying the wasm cost up front.
@mlc-ai/web-llm is an optional peer dependency; install it alongside this package when you opt into browser-only inference.
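A minimal configuration sketch. The interface and callback shapes below are copied locally from this reference so the example runs without `@mlc-ai/web-llm` installed; the model id is the example given under `model`:

```typescript
// Local copies of the shapes this page documents, so the sketch
// stands alone without @mlc-ai/web-llm installed.
interface WebLlmEngineLike {}

interface WebLlmConfig {
  model: string;
  engine?: WebLlmEngineLike;
  onProgress?: (info: { progress: number; text: string }) => void;
}

const config: WebLlmConfig = {
  // Model id from MLC's catalog.
  model: "Llama-3.1-8B-Instruct-q4f16_1-MLC",
  // Surface download / compile progress, e.g. in a loading screen.
  onProgress: (info) => {
    console.log(info.progress, info.text);
  },
};

console.log(config.model);
```

Leaving `engine` unset means the adapter will spin up its own MLCEngine lazily on the first stream, as described above.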
## Properties
### engine?

optional engine?: WebLlmEngineLike
Defined in: webllm.ts:21
Override the engine to inject a pre-loaded one (the MLCEngine spin-up is non-trivial, so apps usually warm it once, not per turn).
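A sketch of the warm-once pattern this property enables. The `loadEngine` stub is hypothetical; a real app would create the engine via `@mlc-ai/web-llm` instead and memoize the resulting promise the same way:

```typescript
// Minimal stand-ins for the documented shapes, so this sketch runs
// without @mlc-ai/web-llm installed.
interface WebLlmEngineLike {}

interface WebLlmConfig {
  model: string;
  engine?: WebLlmEngineLike;
}

// Hypothetical loader standing in for the real engine creation,
// which downloads and compiles the model.
async function loadEngine(model: string): Promise<WebLlmEngineLike> {
  return { loadedModel: model } as WebLlmEngineLike; // stub
}

// Warm the engine once at startup and reuse it for every turn.
let warmEngine: Promise<WebLlmEngineLike> | undefined;
function getEngine(model: string): Promise<WebLlmEngineLike> {
  warmEngine ??= loadEngine(model);
  return warmEngine;
}

async function makeConfig(model: string): Promise<WebLlmConfig> {
  return { model, engine: await getEngine(model) };
}
```

Memoizing the promise (rather than the resolved engine) also de-duplicates concurrent first calls.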
### model

model: string
Defined in: webllm.ts:16
Model id from MLC's catalog, e.g. Llama-3.1-8B-Instruct-q4f16_1-MLC.
### onProgress?

optional onProgress?: (info) => void
Defined in: webllm.ts:23
Engine progress callback (model download / compile percent).
#### Parameters

info: { progress: number; text: string }

#### Returns

void
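A small sketch of a handler for this callback. The property names match the parameter list above; treating `progress` as a fraction in [0, 1] is an assumption of this example, not something this page specifies:

```typescript
// Shape of the callback argument, per the parameter list above.
interface ProgressInfo {
  progress: number;
  text: string;
}

// Assumption: `progress` is a fraction in [0, 1]; convert it to a
// percent for display.
function formatProgress(info: ProgressInfo): string {
  const pct = Math.round(info.progress * 100);
  return `[${pct}%] ${info.text}`;
}

const onProgress = (info: ProgressInfo): void => {
  console.log(formatProgress(info));
};

onProgress({ progress: 0.42, text: "Fetching model shards" });
```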