Found 12 repositories (showing 12)
docusealco
Ruby FFI bindings for llama.cpp to run open-source LLMs such as GPT-OSS, Qwen 3.5, Gemma 4, and Llama 3 locally with Ruby.
kantan-kanto
Local LLM session nodes for ComfyUI using GGUF and llama.cpp, supporting Llama, Mistral, Qwen, DeepSeek, GLM, Gemma, Phi, LLaVA and gpt-oss, enabling both user–model chat and model-to-model dialogue without external runtimes like Ollama.
feers77
llama.cpp fork implementing engram technology to run models without a GPU. This proof of concept runs gpt-oss:120b using only CPU and RAM.
sovit-123
A local RAG + web search pipeline with gpt-oss and other similar-scale models, powered by llama.cpp.
PunithVT
AI-powered inference platform: deploy OpenAI's GPT-OSS-20B on AWS EC2 with GPU acceleration using llama.cpp.
rick-stevens-ai
Run OpenAI GPT-OSS-120B (116.83B params, 60 GB) on a single Intel Data Center GPU Max 1550 using the llama.cpp SYCL backend.
jefripunza
No description available
escape-velocity-ai
A general Docker container for running OpenAI gpt-oss models using llama.cpp.
stevenke1981
llama.cpp deployment scripts for GPT-OSS 20B GGUF model (Windows & Linux)
dvrlabs
A basic CLI for a local gpt-oss LLM running in llama.cpp. Written in Odin.
nefaereti
One-command installer and uninstaller for GPT-OSS 20B HERETIC uncensored AI model. Automatically downloads, verifies, and runs locally on Windows with llama.cpp.
aman-chauhan
Offline paper-reading companion using llama.cpp + GPT-OSS + Python. Helps you locate evidence, summarize sections, and build your own notes while keeping analysis local.
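A common thread across these repositories is serving a GPT-OSS GGUF build through llama.cpp's built-in HTTP server. A minimal sketch of that pattern, assuming llama.cpp is already built locally and using a hypothetical model file name (check each repo for the exact GGUF it expects):

```shell
# Start the llama.cpp HTTP server with the model fully on CPU.
# The model path is an assumption; add -ngl 99 to offload layers
# to a GPU if one is available.
./llama-server -m models/gpt-oss-20b.gguf -c 4096 \
  --host 127.0.0.1 --port 8080

# Query the OpenAI-compatible chat endpoint the server exposes:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

The individual projects above mostly wrap some variant of this invocation: in a Docker container, behind install scripts for Windows/Linux, or inside a Ruby/Python/Odin front end.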