Run open-source LLMs locally — no network, no API key, no telemetry.
Free · MIT · runtime + model weights local
LLM access without surveillance: local runtimes, confidential-compute (TEE) hosting, and crypto-paid API gateways. Order matters: local first, then confidential compute, then crypto-paid proxies, then mainstream APIs with email-only signup.
last reviewed 2026-05-13
C++ runtime for running LLMs locally on CPU and GPU. The backbone of nearly every privacy-LLM stack.
Free · MIT · C++ · CPU/CUDA/ROCm/Metal
Desktop GUI for local LLMs. Drag-drop GGUF models, OpenAI-compatible local API, no cloud.
Free for personal use · macOS/Win/Linux · localhost API
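GGUF files like the ones this GUI loads start with a fixed 4-byte magic followed by a little-endian version number, so a corrupt or mislabeled download can be caught before loading. A minimal sketch (the helper name is mine, not any tool's API):

```python
import struct

GGUF_MAGIC = b"GGUF"  # first 4 bytes of every GGUF file

def is_gguf(path):
    """Return (ok, version): ok is True if the file carries the GGUF
    magic; version is the little-endian uint32 that follows it."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != GGUF_MAGIC:
        return False, None
    (version,) = struct.unpack("<I", header[4:8])
    return True, version
```

Running it on a fresh download before pointing the GUI at it is a cheap sanity check; a `(False, None)` result usually means a truncated transfer or an HTML error page saved under a `.gguf` name.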
Open-source ChatGPT alternative running locally. Free, offline-first, model marketplace.
Free · MIT · macOS/Win/Linux · OpenAI-compat API
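Several entries above expose an OpenAI-compatible API on localhost, which means any OpenAI-style client works against them without an API key. A minimal Python sketch of a chat-completion call (the base URL, port, and model name below are assumptions; check your runtime's server settings for the real values):

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style /chat/completions request (no network I/O)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # no API key needed locally
        method="POST",
    )

def chat(base_url, model, prompt):
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(base_url, model, prompt)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Hypothetical local endpoint; the port differs per runtime.
    print(chat("http://localhost:1234/v1", "local-model", "Say hello."))
```

Because the endpoint speaks the OpenAI wire format, everything stays on `localhost`: no key, no account, no traffic leaving the machine.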