We have hosted the application wllama so that it can be run in our online workstations, either with Wine or directly.


Quick description of wllama:

wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project allows developers to run LLMs locally without requiring a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD to achieve efficient execution in modern browsers while maintaining compatibility across platforms. By running models locally on the user's device, wllama enables privacy-preserving AI applications that do not send data to remote servers. The framework provides high-level APIs for common tasks such as text generation and embeddings, as well as low-level APIs that expose tokenization, sampling controls, and model state management.
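For orientation, here is a minimal sketch of browser-side text completion with wllama's high-level API. It assumes the npm package @wllama/wllama and the methods shown in the project's examples (loadModelFromUrl, createCompletion); the WASM paths, sampling option names, and model URL below are placeholders and may differ between versions.

```typescript
import { Wllama } from '@wllama/wllama';

// Paths to the WebAssembly binaries shipped with the package
// (names assumed from the project's examples; adapt to your bundler).
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/node_modules/@wllama/wllama/esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/node_modules/@wllama/wllama/esm/multi-thread/wllama.wasm',
};

async function main(): Promise<void> {
  const wllama = new Wllama(CONFIG_PATHS);

  // Download a GGUF model over HTTP and run it entirely in the browser;
  // no inference server or dedicated GPU is involved. Placeholder URL.
  await wllama.loadModelFromUrl('https://example.com/models/tiny-model.gguf');

  // High-level completion with basic sampling controls.
  const output = await wllama.createCompletion('Once upon a time,', {
    nPredict: 50,
    sampling: { temp: 0.7, top_k: 40, top_p: 0.9 },
  });
  console.log(output);
}

main();
```

Because everything executes client-side, the prompt and the generated text never leave the page.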

Features:
  • WebAssembly binding that enables llama.cpp inference inside browsers
  • Local execution of large language models without server infrastructure
  • High-level APIs for text completion and embeddings generation
  • Low-level control over tokenization, sampling, and model caching (see the sketch after this list)
  • Support for GGUF model format and parallel model loading
  • TypeScript integration for building modern web AI applications
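
To make the low-level items above concrete, here is a hedged sketch of tokenization and embedding calls. It assumes methods named tokenize, detokenize, and createEmbedding as listed in the project's API documentation; the exact signatures, return types, and any embeddings-related load options are assumptions.

```typescript
import { Wllama } from '@wllama/wllama';

// Assumes `wllama` already has a model loaded (see the completion sketch
// above); computing embeddings may require loading the model with an
// embeddings-enabled option, which is an assumption here.
async function lowLevelDemo(wllama: Wllama): Promise<void> {
  // Tokenize text into model token IDs and decode them back.
  const tokens = await wllama.tokenize('Hello from the browser!');
  console.log('token ids:', tokens);

  // detokenize is assumed to return raw UTF-8 bytes, as llama.cpp does.
  const bytes = await wllama.detokenize(tokens);
  console.log('round trip:', new TextDecoder().decode(bytes));

  // One embedding vector per input, computed on the user's device,
  // usable for local similarity search without any network calls.
  const embedding = await wllama.createEmbedding('Hello from the browser!');
  console.log('embedding dimensions:', embedding.length);
}
```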


Programming Language: TypeScript.
Categories:
Large Language Models (LLM)
