cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.
Stars: 43
Forks: 14
Watchers: 43
Open Issues: 23
Commit activity (per period, most recent first): 140, 92, 83, 35, 17, 12, 7 commits.
Latest commits:
045c8e7  Update llama.cpp submodule to latest release b5359 (#481)
4d82f2e  Merge pull request #470 from menloresearch/update-submodule-2025-04-29-17-00
c780a56  Update llama.cpp submodule to latest release b5205 (#468)
a73aa5a  Update llama.cpp submodule to latest release b4963 (#440)
ef1f1d3  Merge pull request #424 from janhq/update-submodule-2025-03-11-17-00
de6a4dd  Merge pull request #420 from janhq/update-submodule-2025-03-06-17-00
07a93e6  Merge pull request #419 from janhq/update-submodule-2025-03-05-17-00
4284fb7  Merge pull request #418 from janhq/update-submodule-2025-03-04-17-01