Found 15 repositories (showing 15)
ikawrakow
llama.cpp fork with additional SOTA quants and improved performance
raketenkater
Smart launcher for llama.cpp / ik_llama.cpp — auto-detects GPUs, optimizes MoE placement, crash recovery
Clarit-AI
Synapse is a high-performance llama.cpp fork built on ik-llama.cpp and rk-llama.cpp, focused on efficient AI inference and deployment on edge devices.
ThomasBaruzier
Archive of ikawrakow/ik_llama.cpp. Made with ghbkp.
kim90000
No description available
PieBru
Cloned from the original ik_llama.cpp before it disappeared (404). Note: The last commits before it disappeared contained the implementation of iq1_kt - can be found here: https://github.com/Thireus/ik_llama.cpp/commit/87fd730bfa934a38f02b8e65afbe2538e92403fd
hchengit
Fork of ik_llama.cpp
ProgenyAlpha
Transplant upstream llama.cpp DeltaNet implementation into ik_llama.cpp PR #1251
creativebalance
No description available
jdvpro
No description available
pt13762104
LLM inference in C/C++ (with patches for TU11x devices)
mo79571830
No description available
neuromaniacMD
ROCm HIP port patches for ik_llama.cpp — 90% compiled, 10 fixes for AMD GPU compilation
jordandevai
An OpenAI-Compatible proxy that enables reliable tool calling for models served by IK_LLama.cpp
ProgenyAlpha
Debugging fused DeltaNet CPU kernel race condition in ik_llama.cpp PR #1251 for Qwen3-Coder-Next