Scalable conversational memory via recursive sub-agent delegation — 46% EM vs 5% truncation on LongMemEval-S, zero training
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
Repository health: no package.json found, so this is likely not a Node.js project.
13 commits
- `e49dda1` Add v2 ablation results: adaptive routing scores 40% EM (vs 46% v1)
- `513ac7c` Rewrite PAPER.md to match LaTeX paper with all verified results
- `d018f4f` feat: add RAG baseline (text-embedding-3-small + gpt-4o-mini)
- `3d14550` fix: correct Hindsight model name (not Gemini-3, open-source 20B+)
- `cc87474` docs: update README with verified 4s parallel latency (54x speedup)
- `d08ba4f` docs: update paper with verified 54x parallel latency results
- `0699962` perf: verify 54x latency reduction with parallel sub-agents
- `34d696c` feat: parallel sub-agent execution via ThreadPoolExecutor
- `6229c7e` Initial release: RLM-Memory, scalable conversational memory via sub-agent delegation
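The commit log mentions "parallel sub-agent execution via ThreadPoolExecutor," the change credited with the 54x latency reduction. A minimal sketch of that fan-out pattern is below; the function names (`query_sub_agent`, `answer_in_parallel`) and the fixed-size chunking scheme are hypothetical illustrations, not the repo's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def query_sub_agent(chunk: str, question: str) -> str:
    # Placeholder: in the real system each sub-agent would call an LLM
    # over its assigned slice of the conversation history.
    return f"answer derived from a {len(chunk)}-char slice"

def answer_in_parallel(history: str, question: str, n_agents: int = 4) -> list[str]:
    # Split the conversation history into one slice per sub-agent.
    size = max(1, len(history) // n_agents)
    chunks = [history[i:i + size] for i in range(0, len(history), size)]
    # Fan out: each sub-agent reads only its own slice, so the calls are
    # independent and can run concurrently instead of sequentially.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        return list(pool.map(lambda c: query_sub_agent(c, question), chunks))
```

Because the sub-agent calls are I/O-bound (LLM API requests), a thread pool suffices despite the GIL; the wall-clock time becomes roughly that of the slowest single sub-agent rather than the sum of all of them.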