Improved TurboQuant quantization for llama.cpp — adding QJL residual, residual window, asymmetric K/V to turbo-tan fork
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 2
6 commits (most recent first):
- 5956784: Research-validated plan: drop QJL for KV, add KIVI/RotateKV/KITTY techniques
- 15c100c: Align all docs: fork improvement project, not just model quantization
- af226df: Phase 3: Implement QJL 1-bit residual correction (TQ3_QJL type)
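The "1-bit residual correction" mentioned in the Phase 3 commit can be illustrated with a minimal sketch: after a coarse base quantization, store one sign bit per element of the residual plus a single shared magnitude (the mean absolute residual) per block, and add that correction back at dequantization time. This is a generic NumPy illustration of the idea, not the fork's actual TQ3_QJL implementation; all function names and the base quantizer are illustrative assumptions.

```python
import numpy as np

def quantize_base(x, bits=3):
    # Illustrative symmetric uniform quantizer (stand-in for the real base quant).
    levels = 2 ** bits
    amax = np.abs(x).max()
    scale = amax / (levels / 2 - 0.5) if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -(levels // 2), levels // 2 - 1)
    return q, scale

def dequantize_base(q, scale):
    return q * scale

def residual_1bit(x, xhat):
    # 1-bit residual correction: keep only the sign of each residual element,
    # plus one shared magnitude (mean |residual|) for the whole block.
    r = x - xhat
    return np.sign(r), np.abs(r).mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)

q, s = quantize_base(x)
xhat = dequantize_base(q, s)
sign, mag = residual_1bit(x, xhat)
xhat_corrected = xhat + sign * mag  # apply the 1-bit correction

err_base = np.mean((x - xhat) ** 2)
err_corr = np.mean((x - xhat_corrected) ** 2)
assert err_corr < err_base  # the extra bit per element lowers MSE
```

The improvement is guaranteed whenever the mean absolute residual is nonzero: correcting each residual toward its sign by the mean magnitude m reduces the per-element expected squared error by exactly m².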