Version: 1.0 (1)
License: unset
Confinement: strict
Base: core22
Local LLM REST API using llama-server and Qwen 0.5B
A fully automated deployment of a lightweight, CPU-only Large Language Model.
This snap packages llama.cpp's llama-server with Qwen2.5-0.5B to expose an
OpenAI-compatible REST API.
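Because the server speaks the OpenAI-compatible chat-completions protocol, any plain HTTP client can talk to it. A minimal sketch in Python, assuming the server listens on `localhost:8080` (llama-server's default port; the snap's configuration may differ, and the model name shown is illustrative):

```python
import json
import urllib.request


def build_chat_request(prompt: str, base_url: str = "http://localhost:8080"):
    """Build an OpenAI-style /v1/chat/completions request for llama-server."""
    payload = {
        "model": "qwen2.5-0.5b",  # illustrative; llama-server ignores or maps this
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Sending the request (requires the snap's server to be running):
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

The request/response shapes follow the OpenAI chat-completions format, so existing OpenAI client libraries can also be pointed at the local base URL.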
Update History
1.0 (1) — 30 Apr 2026, 14:15 UTC