A proxy that hosts multiple single-model runners such as llama.cpp and vLLM.
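The description implies a routing layer that maps a requested model name to the backend runner serving it. A minimal sketch of that idea, assuming a simple model-to-backend table (the registry, model names, and ports below are hypothetical illustrations, not taken from this repository's code):

```python
# Hypothetical sketch of model-name routing for a multi-runner proxy.
# Names, ports, and the registry layout are illustrative only.
from urllib.parse import urljoin

# Hypothetical registry: model name -> base URL of its single-model runner.
RUNNERS = {
    "llama-3-8b": "http://127.0.0.1:8081/",  # e.g. a llama.cpp server
    "qwen2-7b": "http://127.0.0.1:8082/",    # e.g. a vLLM server
}

def route(model: str, path: str = "v1/completions") -> str:
    """Return the backend URL that should serve a request for `model`."""
    try:
        base = RUNNERS[model]
    except KeyError:
        raise ValueError(f"unknown model: {model}")
    return urljoin(base, path)
```

A real proxy would forward the request body to the resolved URL and stream the response back; this only shows the lookup step.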
Stars: 13 · Forks: 0
Commits
d7c822d  print exit code
b8766eb  update readme
a5208a3  add template overrides
2b462b7  check llama_cache dir
be02fd9  list models and fix loading loop
699b567  rewrite for huggingface zero config llama.cpp
6f67a76  rewrite readme
6f57aee  mention Arm
53e5ea2  support vllm
f7e76bd  Create LICENSE
97838a9  readme
28537a4  add server
0fbd779  first commit