End-to-end documentation to set up your own local & fully private LLM server on Debian. Equipped with chat, web search, RAG, model management, MCP servers, image generation, and TTS.
Stars: 730 · Forks: 55 · Watchers: 730 · Open Issues: 1
24 commits
7110e77: Add subsections (add user to Docker group, harden Docker containers), update Docker-related sections
bc962bb: Update Docker/HF commands, add minor subsections (Docker/About/General), restructure sections
bbb9d76: Add llama.cpp and LLM service sections, update HF section, consolidate redundant sections
7535a0a: Add Inference Engine section with vLLM docs, refactor sections for simplicity
6cfed77: Add ComfyUI instructions, make general minor improvements