⚡💾 Vectro — Compress LLM embeddings 🧠🚀 Save memory, speed up retrieval, and keep semantic accuracy 🎯✨ Lightning-fast quantization for Python + Mojo, vector DB friendly 🗄️, and perfect for RAG pipelines, AI research, and devs who want smaller, faster embeddings 📊💡
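Vectro's own API is not shown on this page, so as a hedged illustration of the core idea in the tagline (quantize embeddings to shrink memory while preserving semantic similarity), here is a minimal NumPy sketch of symmetric INT8 quantization. All names (`quantize_int8`, `dequantize_int8`) are hypothetical and not from the Vectro codebase:

```python
import numpy as np

def quantize_int8(vec: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric INT8 quantization: one float32 scale per vector, 4x smaller storage."""
    scale = float(np.max(np.abs(vec))) / 127.0 or 1.0  # avoid div-by-zero for all-zero vectors
    q = np.clip(np.round(vec / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate float32 vector from the INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
emb = rng.standard_normal(384).astype(np.float32)   # a typical sentence-embedding size

q, s = quantize_int8(emb)
recon = dequantize_int8(q, s)

# Cosine similarity between original and reconstruction stays very close to 1.0
cos = float(emb @ recon / (np.linalg.norm(emb) * np.linalg.norm(recon)))
print(f"fp32 {emb.nbytes} B -> int8 {q.nbytes + 4} B, cosine {cos:.4f}")
```

The 4 extra bytes account for the per-vector scale; real libraries often also store a zero-point or per-block scales for tighter error bounds.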
Stars: 6 · Forks: 0 · Watchers: 6 · Open Issues: 0
Commits: 95
Recent commits:
- 78ba741  feat(perf): Phase 21 - Criterion IVF/PQ benchmarks, vacuum(), search_filtered, search_for_recall
- 5eb5652  feat(perf): Phase 20 - IVF indexes, proptest, NF4 AVX2, Python+JS bindings
- 9a3902f  feat(perf): Phase 19 — SimSIMD engine, HNSW production features, BF16 quant
- b5b1fd9  feat: Phase 18 complete - v4.0.0 packaging, docs, and public release
- 9880bc8  feat(rust): Phase 17 performance recovery — NEON INT8, numpy bridge, Criterion benches
- e04a46b  feat(rust): Phase 16 algorithm parity — INT8/NF4/Binary/PQ/HNSW
- 8d7ba8e  feat(rust): Phase 1 complete — absorb vectro-plus Rust workspace into vectro
- 31694f7  feat(perf): v3.6.0 — full optimization + multi-benchmark suite
- 3ae9c9b  chore(release): bump to v3.5.0 — Mojo 4.85× faster than FAISS C++
- 4de89df  perf(v3.5.0): SIMD_W=16 + resize() init → Vectro 4.85× faster than FAISS
- f38dcd6  fix(benchmark): fix Mojo stdout parser and stale backend label
- 53f5cb9  docs: Add Mojo benchmark setup guide and runner script
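The commit history names several quantization modes (INT8/NF4/Binary/PQ). As a sketch of the most aggressive one, binary quantization, here is a hypothetical NumPy example (not Vectro code): keep only the sign bit of each dimension, then rank candidates by Hamming distance, which is 32x smaller than float32 and searchable with XOR + popcount:

```python
import numpy as np

def binarize(vecs: np.ndarray) -> np.ndarray:
    """1 bit per dimension: the sign of each component, packed 8-per-byte."""
    return np.packbits(vecs > 0, axis=-1)

def hamming_topk(query_bits: np.ndarray, db_bits: np.ndarray, k: int) -> np.ndarray:
    """Rank database vectors by Hamming distance to the query (smaller = more similar)."""
    dists = np.unpackbits(query_bits ^ db_bits, axis=-1).sum(axis=-1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(1)
db = rng.standard_normal((1000, 256)).astype(np.float32)      # 1000 embeddings, 256-dim
query = db[42] + 0.1 * rng.standard_normal(256).astype(np.float32)  # noisy copy of entry 42

db_bits = binarize(db)        # (1000, 32) bytes instead of (1000, 256) floats
q_bits = binarize(query)
top = hamming_topk(q_bits, db_bits, k=5)
print(int(top[0]))  # -> 42: the perturbed source vector ranks first
```

Production systems typically use binary codes only for a cheap first pass, then rescore the shortlist with full-precision or INT8 vectors to recover accuracy.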