AI / LLMs — privacy-respecting options » Local Runtime
Ollama
● up · checked 41m ago
Run open-source LLMs locally — no network, no API key, no telemetry.
At a glance
- no-KYC signup
- Open-source codebase
- Self-hostable
Review
The reference local-LLM runtime. It pulls quantised model weights and exposes an OpenAI-compatible API on localhost, running on CPU, GPU, or Apple Silicon. The strongest privacy posture available: inference never leaves your machine.
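For illustration, a minimal sketch of a chat request against that localhost API, assuming Ollama's default port (11434) and a model already pulled locally; the model name "llama3" here is only an example, not a recommendation:

```python
# Minimal sketch: chat completion against a local Ollama instance via its
# OpenAI-compatible endpoint. Assumes the default port 11434 and that a
# model (here "llama3", an example choice) has already been pulled with
# `ollama pull llama3`. No API key and no network egress are involved.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Summarise this note locally."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```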
Fees
Free · MIT · runtime + model weights local
Links
Audit trail — receipts for the editorial claim
- Upstream up · HTTP 200 · 318ms · checked 41m ago
- No .onion mirror listed
- Last manual verification 2026-05-12 (<7d)
- See curator log for Ollama
Reviews — moderated · rules
No approved reviews yet. Be the first.
Add a review
Honest, brand-neutral feedback welcome. A curator approves before it appears here. No JS, no signup required.