Private AI infrastructure — llama.cpp + Open WebUI + SearXNG + RAG stack on AMD RX 6800 XT
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
17 commits
- `7c5a17b` Update startup script comments for Ollama CPU-only configuration
- `f70d22b` Force Ollama to CPU, re-enable -np 2 with checkpoint fix, and web search detection
- `cd0902b` Move warmup before Pipelines/WebUI and add Ollama auto-retry to startup script
- `6f51e16` Fix RAG relevance filtering, add startup sequencing, and -np 2 for parallel slots
- `34b7393` Disable prompt cache for Qwen 3.5 hybrid arch and document perf findings
- `74cd8db` Add --poll 0 to eliminate idle CPU spin, RAG timing logs, and warmup script
- `61659ce` Document RAG validation results and retrieval tuning roadmap
- `b4af7bd` Add RAG setup and Pipelines connection instructions to README
- `f71995a` Implement Phase 2 RAG: ingestion pipeline, hybrid search, and Open WebUI integration
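Several of these commits tune llama.cpp server flags (`-np 2` for two parallel slots, `--poll 0` to stop the idle busy-poll from spinning a CPU core) and add startup sequencing with an Ollama auto-retry. A minimal sketch of what such a startup script could look like; the paths, ports, model name, and the `retry` helper are illustrative assumptions, not the repository's actual code:

```shell
#!/usr/bin/env bash
# Sketch of a sequenced startup: llama.cpp server first, then a health
# check with retries before dependent services (Pipelines, Open WebUI).

retry() {  # retry <attempts> <delay_seconds> <command...>
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0   # command succeeded
    sleep "$delay"      # wait before the next attempt
  done
  return 1              # all attempts failed
}

# llama.cpp server: -np 2 enables two parallel request slots,
# --poll 0 disables busy-polling so an idle server uses no CPU.
# (Model path and port are placeholders.)
# llama-server -m /models/model.gguf --port 8080 -np 2 --poll 0 &

# Wait for Ollama's API before starting Pipelines / Open WebUI:
# retry 5 2 curl -sf http://localhost:11434/api/tags
```

The `retry` pattern replaces fixed `sleep`s in the startup order: each downstream service starts only once the service it depends on actually answers.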