xmr.club — independent · curated · graded

Compare providers

Side-by-side comparison of any two listings. Select them by provider id (e.g. mullvad, ivpn, feather). The page updates live as curators re-grade.

|                | llama.cpp | Continue.dev |
|----------------|-----------|--------------|
| Category       | ai | ai |
| Subcategory    | Local Runtime | Coding Assistant |
| Grade          | A | A |
| Editor's Pick  | | |
| Tagline        | C++ runtime for running LLMs locally on CPU + GPU. The backbone of every privacy-LLM stack. | Open-source VS Code / JetBrains AI extension. Point at any local or cloud model, no vendor lock-in. |
| Fees           | Free · MIT · C++ · CPU/CUDA/ROCm/Metal | Free · Apache · VS Code / JetBrains |
| KYC posture    | anonymous_signup | anonymous_signup |
| Highlight tags | LOCAL · OPEN-SOURCE · REFERENCE | LOCAL · OPEN-SOURCE · IDE |
| Feature tags   | non_custodial · open_source · self_hosted · cli_supported | non_custodial · open_source · self_hosted · api_available |
| Web            | https://github.com/ggml-org/llama.cpp | https://continue.dev |
| Tor            | | |
| Last verified  | 2026-05-13 | 2026-05-13 |
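The two listings compose naturally: llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible HTTP API, and Continue can be pointed at any such endpoint. A minimal sketch of the wiring (the model path, port, and model name below are placeholder assumptions, not values from this listing):

```shell
# Serve a local GGUF model over an OpenAI-compatible API.
# llama-server is built from the llama.cpp repo; the model path is a placeholder.
llama-server -m ./models/model.gguf --port 8080

# Sanity-check that the endpoint answers before pointing an editor at it.
curl http://localhost:8080/v1/models
```

Continue can then target `http://localhost:8080/v1` as a custom OpenAI-compatible provider in its model configuration, which keeps the whole coding-assistant stack local, matching both tools' LOCAL and self_hosted tags.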