Found 145 repositories (showing 30)
google-gemini
A proxy sidecar to access Gemini models via OpenAI and Ollama APIs
Stream29
Proxy remote LLM API as Ollama and LM Studio, for using them in JetBrains AI Assistant
openziti
Zero trust LLM gateway. OpenAI-compatible proxy with semantic routing and load balancing across OpenAI, Anthropic, Ollama, vLLM, and any compatible backend. Identity-based access, virtual API keys, and end-to-end encryption via OpenZiti
xrip
NPX/Docker package that creates an Ollama API server and forwards requests to Gemini/OpenAI/DeepSeek/Kimi K2. Its main purpose is to use free-tier APIs in JetBrains AI Assistant.
dext7r
🚀 Intelligent Ollama API proxy pool based on Cloudflare Workers, supporting multi-account rotation, automatic failover, load balancing, and unified authentication
eyalrot
A transparent proxy service that maintains the Ollama API interface while forwarding requests to OpenAI-compatible endpoints
prantlf
HTTP proxy for accessing Vertex AI with the REST API interface of Ollama or OpenAI. Optionally forwards requests for other models to Ollama. Written in Go.
1LCB
A lightweight HTTP reverse proxy that routes requests to multiple Ollama servers. It includes features like rate limiting, API key validation, security filtering, metrics collection, and hot-reloading of configurations.
vibheksoni
Use any LLM with Claude Code — proxy that translates Anthropic API to OpenAI, Gemini, DeepSeek, Ollama, and more. Full tool calling, streaming, ReAct XML fallback, hot-reload config.
MaxPyx
Ollama-friendly OpenAI Embeddings Proxy. This script bridges the gap between OpenAI's embedding API and Ollama, making it compatible with the current version of GraphRAG.
timheide
This application serves as a proxy that implements the Ollama API interface but forwards requests to different LLM providers like Anthropic's Claude and Perplexity AI. This allows IDE plugins that support Ollama to work with these alternative LLM providers.
PaloAltoNetworks
panw-api-ollama is a security proxy that sits between your OpenWebUI interface and Ollama instance. It works by intercepting all prompts and responses, analyzing them with Palo Alto Networks' AI RUNTIME API security technology.
edwardgj
A simple proxy server that enables Qwen Code models to work with GitHub Copilot Chat by mimicking the Ollama API interface.
vhanla
Ollama API proxy server that captures /api/chat responses and strips streamed <think></think> content
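The think-tag filtering described in the entry above can be sketched, for the non-streaming case, roughly as follows (the function name and regex are illustrative, not the repo's actual code; the streamed case additionally has to handle tags split across chunks):

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks, plus any trailing
    whitespace after them, from a complete model response."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)
```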
dheavy
Proxy for the Ollama inference server, enhancing the standard API with additional security features suitable for web-based LLM applications.
fslongjin
A simple Ollama API proxy server that automatically adds "/no_think" prefix to all prompts to disable the thinking functionality of Qwen3 model.
BretFisher
A proxy for Docker Model Runner that makes it look more like Ollama's API
System233
A proxy for GitHub Copilot that converts OpenAI-compatible requests to the Ollama API.
ArthurFranckPat
Proxy server that mimics Ollama's API but routes requests to Claude Code CLI for Zed integration
stevelittlefish
LLM logging and translation proxy. Exposes an Ollama-compatible API and proxies requests to either an Ollama server or an OpenAI-compatible server (e.g. llama.cpp). Logs all messages and allows easy integration of Home Assistant with llama.cpp.
WebDevSachin
Complete deployment solution for Qwen3-Coder (30B/480B) on RunPod with Ollama + LiteLLM proxy. Features secure OpenAI-compatible API endpoint with authentication, persistent storage configuration, automated backups, and VS Code integration. Perfect for AI-powered development workflows.
AstroAir
A modern, high-performance proxy server that translates Ollama API calls to OpenRouter.
micheleminardidev
A lightweight Node.js + Express proxy that lets you use Ollama Turbo with any software or tool that already works with the original Ollama API — no code changes required.
wrtx-dev
Proxy Ollama API requests as OpenAI-compatible API requests.
mazen160
AWS Bedrock API Proxy Server: interact with AWS Bedrock models through a standardized Ollama API format
PJ-568
A secure reverse proxy that adds an authentication layer to the Ollama API using the Caddy server. Protect your Ollama instance with Bearer Token authentication. In short, use a key to access your Ollama API.
rewolf
Very simple proxy for measuring OpenAI API requests on any compatible service (Ollama, vLLM, etc.)
NickScherbakov
A simple Node.js proxy and HTML page for testing the API endpoints of Ollama servers
TheSethRose
Python-based proxy server that emulates the Ollama REST API but forwards requests to the OpenRouter API. It allows tools designed for Ollama to leverage models available through OpenRouter, supporting chat completions, model listing, and streaming.
BarricadeDev
Converts the GitHub Copilot API endpoints from LM Proxy into Ollama-compatible ones, so you can use other Copilot Chat models such as GPT-4, GPT-4o Mini, and GPT-3.5 Turbo.
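Most of the repositories listed above hinge on the same core translation between Ollama's /api/chat request shape and OpenAI's /v1/chat/completions shape. A minimal sketch of that mapping, covering only the common fields (field names follow the two public APIs; this is not any particular repo's implementation):

```python
def ollama_to_openai(body: dict) -> dict:
    """Map an Ollama /api/chat request body to an OpenAI
    /v1/chat/completions request body (common fields only)."""
    out = {
        "model": body["model"],
        "messages": body["messages"],
        # Ollama streams by default; OpenAI-style APIs do not
        "stream": body.get("stream", True),
    }
    # Ollama nests sampling parameters under "options"
    opts = body.get("options", {})
    if "temperature" in opts:
        out["temperature"] = opts["temperature"]
    if "num_predict" in opts:
        out["max_tokens"] = opts["num_predict"]
    return out
```

The reverse direction, wrapping the upstream response back into Ollama's response envelope, is the other half these proxies implement.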