LLM-agnostic memory layer for AI agents. No embeddings, no vector DB — just fast, structured, temporal memory that any LLM can consume as plain text.
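To make the idea concrete, here is a toy sketch of what "structured, temporal memory as plain text" could look like. This is a hypothetical illustration only, not Memtrace's actual API or schema: every name (`Memory`, `PlainTextMemoryStore`, `log`, `render`) is invented for this example. The point is that timestamped entries rendered as plain text can be dropped into any LLM's prompt, with no embeddings or vector database involved.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    """One timestamped memory entry (hypothetical shape, not Memtrace's schema)."""
    text: str
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class PlainTextMemoryStore:
    """Toy append-only, time-ordered memory store whose output is plain text
    that any LLM can consume in its prompt -- no embeddings, no vector search."""

    def __init__(self) -> None:
        self._items: list[Memory] = []

    def log(self, text: str) -> None:
        # Append a new memory with the current UTC timestamp.
        self._items.append(Memory(text))

    def render(self, limit: int = 10) -> str:
        # Return the most recent memories, oldest first, one per line,
        # prefixed with an ISO-8601 timestamp.
        recent = self._items[-limit:]
        return "\n".join(
            f"[{m.ts.isoformat(timespec='seconds')}] {m.text}" for m in recent
        )

store = PlainTextMemoryStore()
store.log("User prefers metric units")
store.log("User's project is named 'atlas'")
print(store.render())
```

Because the rendered output is just text, it works identically with any model provider, which is what "LLM-agnostic" implies here.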
Stars: 6
Forks: 0
Watchers: 6
Open Issues: 11
Overall repository health assessment: no package.json was found, so this is likely not a Node.js project.
43 commits
9c9b5d2  Merge pull request #37 from Basekick-Labs/fix/docker-healthcheck
9af1564  fix(docker): use 127.0.0.1 in healthcheck instead of localhost
82168b7  Merge pull request #36 from Basekick-Labs/feat/openai-cookbook
7bfe430  feat: add docker-compose for Memtrace with Traefik
9c0f945  Merge pull request #35 from Basekick-Labs/feat/openai-cookbook
cc7f788  docs: move How It Works and Documentation sections higher in README
cf4d1fa  docs: update cross-references for OpenAI cookbook
b1b97d3  feat(examples): add OpenAI API + Memtrace cookbook
8477aca  Merge pull request #34 from Basekick-Labs/feat/sdk-list-sessions
33b86a4  feat(sdk): add list_sessions to Python SDK
997f683  Merge pull request #33 from Basekick-Labs/feat/telegram-support-example
94b1f95  Merge pull request #31 from Basekick-Labs/fix/log-memory-400
75d7206  feat(examples): add Telegram customer support demo
9dccd53  fix(api): log warning on memory create validation failures
58874dd  Merge pull request #30 from Basekick-Labs/feat/handler-logging