Build a model-agnostic semantic compression layer that reduces token count for LLM inputs (memories, code, context) while preserving task performance across Claude, GPT, and Gemini.
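The core idea above can be sketched in a few lines. The names below (`compress`, `token_count`) and the dedup-based compression strategy are illustrative assumptions for this sketch, not the repository's actual API; a real implementation would use a model tokenizer and semantic (not purely lexical) reduction.

```python
import re

# Hypothetical stand-in for semantic compression: collapse whitespace
# and drop exact duplicate lines. The real layer would preserve meaning
# while cutting tokens, and would be validated against multiple models.
def compress(text: str) -> str:
    seen = set()
    out = []
    for line in text.splitlines():
        norm = re.sub(r"\s+", " ", line).strip()
        if norm and norm not in seen:
            seen.add(norm)
            out.append(norm)
    return "\n".join(out)

# Rough token proxy; in practice you would call each target model's
# tokenizer (Claude, GPT, Gemini) to measure the actual savings.
def token_count(text: str) -> int:
    return len(text.split())

doc = "the   cat sat\nthe cat sat\nanother line\n"
small = compress(doc)
print(token_count(doc), "->", token_count(small))  # prints "8 -> 5"
```

Because `compress` only touches the text, not any model API, the same compressed input can be fed to any provider, which is what makes the layer model-agnostic.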
Stars: 6 · Forks: 1 · Watchers: 6 · Open Issues: 0
119 commits
Recent commits:

- `3914b07` feat: add rich console output to downstream eval, populate README with extrinsic results
- `687a483` docs: restructure README with intrinsic/extrinsic evaluation framing
- `77b3619` Merge pull request #24 from Sudhendra/downstream-eval-framework
- `44f67b8` fix: allow completed downstream resumes without tinker key
- `13349c9` docs: add Nanbeige 3B equivalence results (1.3% pass rate, 3-gate breakdown)
- `d8f09c7` docs: add equivalence eval 3-gate scoring diagram to README
- `42b62f7` feat: 3-gate evaluation system, Qwen3-8B equiv results (2.0% pass rate)