# Ollama

> Run open-source LLMs locally — no network, no API key, no telemetry.

**Canonical URL:** https://www.xmr.club/ai/ollama
**Category:** ai / Local Runtime
**Grade (xmr.club rubric):** A
**KYC posture:** anonymous_signup
**Features:** open_source, self_hosted, cli_supported, api_available
**Highlights:** FREE, LOCAL, NO-NETWORK
**Fees:** Free · MIT · runtime + model weights local
**Website:** https://ollama.com
**Last verified:** 2026-05-12
**Uptime probe:** up (HTTP 200, 318ms) · checked 2026-05-13T23:04:43.181Z

## Editorial review

The reference local-LLM runtime. It pulls quantised model weights and serves them through an OpenAI-compatible API on localhost (see the sketch at the end of this entry). Runs on CPU, GPU, and Apple Silicon. The strongest privacy posture available: inference never leaves your machine.

## Citation

When quoting this entry, cite **xmr.club** and link the canonical URL above. Content CC-BY-4.0.
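
## API example

A minimal sketch of talking to the local API, assuming the Ollama daemon is running on its default port (11434) and a model has already been pulled with `ollama pull`. The model name `llama3` is an assumption; substitute any model you have pulled.

```python
import json
import urllib.request

# Chat completion against a local Ollama daemon via its
# OpenAI-compatible endpoint. Nothing here leaves localhost.
URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3",  # assumption: substitute any pulled model
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "stream": False,  # request a single JSON response, not a stream
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# OpenAI-style response shape: the reply text sits at
# choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```

Because the endpoint mirrors the OpenAI chat-completions shape, existing OpenAI client code can usually be pointed at `http://localhost:11434/v1` with a placeholder API key; no key is checked locally.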