A full-featured AI assistant that runs entirely offline using llama.cpp, optimized for low-latency, high-accuracy inference on consumer-grade hardware.
Stars: 10
Forks: 0
Watchers: 10
Open Issues: 1
Recent commits:
- Setup connection between backend and the front. (#48) (7b524f6)
- Construct the bridge between front and back (#39) (7aea83a)
- feat: Complete Asynchronous Refactor & Architectural Hardening (#33) (39c7856)