AI / LLMs — privacy-respecting options » Local Runtime
A llama.cpp
● up · checked 6h ago · verified 3 days ago
C++ runtime for running LLMs locally on CPU + GPU. The backbone of every privacy-LLM stack.
At a glance
- No-KYC signup
- Open-source codebase
- Non-custodial — you hold keys
- Self-hostable
Review
The reference inference engine for running Llama / Mistral / Qwen / DeepSeek family models on local hardware. Quantised GGUF formats let a 30B model run on a consumer GPU. Most privacy-focused LLM products (Ollama, LM Studio, Jan) wrap this. Apple Silicon (Metal), NVIDIA CUDA, and AMD ROCm are all supported.
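The "30B on a consumer GPU" claim follows directly from the arithmetic of quantisation. A rough back-of-envelope sketch (the 4.5 bits/weight figure for a Q4_K-style quant is an approximation including scale/block metadata, not an official number from the GGUF spec):

```python
def gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-VRAM/on-disk size of a quantised model.

    bits_per_weight is an average: Q4_K-style quants land around
    4.5 bits/weight once block scales are included (approximate).
    """
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# FP16 weights: far too large for a 24 GB consumer card
fp16 = gguf_size_gb(30, 16)   # 60.0 GB
# ~4.5-bit quantisation brings the same 30B model under 24 GB
q4 = gguf_size_gb(30, 4.5)    # ~16.9 GB

print(f"30B fp16 = {fp16:.1f} GB, 30B q4 = {q4:.1f} GB")
```

Weights are the dominant term; KV cache and activations add a few GB on top, so the quantised figure is the floor, not the total.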
Fees
Free · MIT · C++ · CPU/CUDA/ROCm/Metal
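A minimal local run looks like the following; a sketch assuming the binaries are already built and a GGUF model file downloaded (paths and filenames are placeholders):

```shell
# One-off completion against the local model (no network access needed)
./llama-cli -m ./models/model.gguf -p "Hello" -n 64

# Or serve an OpenAI-compatible HTTP API on localhost for other tools
./llama-server -m ./models/model.gguf --port 8080
```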
Links
Why A? — rubric definition
Best evidence tier. Signup tested end-to-end by an xmr.club curator: deposit, withdrawal, and edge cases. No-KYC posture verified at retail volume. Last verified within 12 months. The full rubric and a worked example are at /methodology; the curator's reasoning for this specific listing is in the audit log.
Audit trail — receipts for the editorial claim
- Upstream up · HTTP 200 · 655ms · checked 6h ago
- No .onion mirror listed
- Last manual verification: 2026-05-13 (<7d)
- See curator log for llama.cpp
Reviews — moderated · rules
No approved reviews yet. Be the first.
Add a review
Honest, brand-neutral feedback welcome. A curator approves each review before it appears here. No JS, no signup required.