Deterministic LLM caching layer with context optimization, LRU eviction, model-aware token counting, and concurrency-safe request deduplication.
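Two of the advertised features, LRU eviction and concurrency-safe request deduplication, combine naturally: a bounded cache answers repeat prompts, while an in-flight promise map ensures that concurrent requests for the same uncached prompt trigger only one upstream call. A minimal sketch of that combination is below; the names (`LruCache`, `cachedComplete`) are illustrative and not NeuroCache's actual API.

```typescript
// Illustrative sketch, not NeuroCache's real implementation.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Map preserves insertion order, so the first key is least recently used.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}

// Concurrent callers awaiting the same prompt share one in-flight promise
// instead of each issuing a duplicate upstream LLM call.
const inflight = new Map<string, Promise<string>>();
const cache = new LruCache<string, string>(128);

async function cachedComplete(
  prompt: string,
  llmCall: (p: string) => Promise<string>,
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return hit; // cache hit: no upstream call

  let pending = inflight.get(prompt);
  if (!pending) {
    // First caller for this prompt starts the request; later concurrent
    // callers reuse the same promise until it settles.
    pending = llmCall(prompt).finally(() => inflight.delete(prompt));
    inflight.set(prompt, pending);
  }
  const result = await pending;
  cache.set(prompt, result);
  return result;
}
```

In a real deterministic caching layer the key would also incorporate the model name and sampling parameters, so that identical prompts sent to different models do not collide.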
Stars: 3 | Forks: 1 | Watchers: 3 | Open Issues: 0
Commits:
6b00da3  Initial commit: NeuroCache v1.0.0 with production-safe Context Intelligence