Version: v0.18.0 (112)
Size: 2662.8 MB
License: MIT
Confinement: strict
Base: core24
Ollama – Local AI Model Runner
Get up and running with large language models, locally.
Run, manage, and switch between a wide range of open‑source LLMs directly on your local machine. Ollama provides fast, offline inference with a simple CLI and API, ensuring your data never leaves the device. Ideal for developers, researchers, and anyone who wants powerful AI without cloud dependencies.
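Since inference runs entirely on-device, the API is served from localhost. Below is a minimal Python sketch of a call to Ollama's `/api/generate` endpoint, assuming the default port 11434 and a model (here `llama3`) that has already been pulled; the model name and helper names are illustrative, not part of any official client.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local API endpoint


def build_payload(prompt: str, model: str = "llama3") -> bytes:
    """Encode a non-streaming request body for Ollama's /api/generate route."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def generate(prompt: str, model: str = "llama3") -> str:
    # Sends the prompt to the locally running Ollama server;
    # the request never leaves the machine.
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Switching models is just a matter of changing the `model` field in the payload; the same endpoint serves whichever models you have pulled locally.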
Update History
v0.17.4 (109) → v0.18.0 (112) – 18 Mar 2026, 13:33 UTC
v0.15.1 (105) → v0.17.4 (109) – 10 Mar 2026, 20:33 UTC