Found 345 repositories (showing 30)
rohitg00
A curated collection of awesome AI Agents and LLM Apps built with multiple tech stacks, showcasing real-world implementations using OpenAI, Gemini, local models, and various AI frameworks.
SFARPak
Free, local-first full-stack AI app builder & automation — build, test & deploy with LLMs. Open-source alternative to Antigravity, Lovable, and Bolt ✨ 🌟 Star if you like it!
zebbern
Full-stack vibe coding platform with model freedom. Any LLM, local or cloud, zero lock-in.
MuratBuker
Local LLM stack built from open-source components, with documented installation and configuration steps.
agruai
Local RAG + offline LLM demo with Streamlit chat UI, FAISS vector DB, and LoRA tuning/merge utilities — fully powered by Ollama models running locally. Tech Stack: Python 3.11+ · Streamlit · FAISS · Ollama (local LLM + embeddings) · nomic-embed-text · qwen2.5-instruct · PDF ingestion + chunking · LoRA fine-tuning helpers
vanvuongngo
ClaraN — Privacy-first, fully local AI workspace with Ollama LLM chat, tool calling, agent builder, Stable Diffusion, and embedded n8n-style automation. Backend in Rust. Just your stack, your machines.
dalekurt
A Docker Compose setup for running a local AI and LLM environment with multiple services including AnythingLLM, Flowise, Open WebUI, n8n, and Qdrant.
nixiz0
A fast, lightweight local AI assistant that runs fully on your PC (on a Docker stack). Write, analyze, summarize, code, and brainstorm with zero data leakage, powered by local LLMs and a clean, efficient interface.
bendusy
Full local AI inference stack on Apple Silicon via MLX — LLM, ASR, Embedding, OCR, TTS, Transcription
Alex2Yang97
A full-stack local deep research application built with LangGraph, supporting multiple LLM providers and search APIs. Powered by FastAPI + LangGraph backend and Next.js 15 + React 19 frontend, delivering a modern UI and comprehensive local research solution.
getsimpledirect
AI-Powered Infrastructure Stack — Local LLM inference, vector database, and automated content pipelines.
simple10
Full-stack observability for local AI agent development. Provides an llm-proxy to use as the LLM provider base URL, and logs traces to opik.
elliotdes
Knowledge base management stack using local LLM, Obsidian, and Raycast.
slinusc
Bench360 is a modular benchmarking suite for local LLM deployments. It offers a full-stack, extensible pipeline to evaluate the latency, throughput, quality, and cost of LLM inference on consumer and enterprise GPUs. Bench360 supports flexible backends, tasks and scenarios, enabling fair and reproducible comparisons for researchers & practitioners.
devsnit
⚡ A ready-to-use Dockerized stack combining Open WebUI and LiteLLM — enabling local LLM chat interfaces with support for OpenAI, Anthropic, Groq, DeepSeek, and more. Easily configurable, extendable, and secure.
BonifaceAlexander
Lightweight LLM cost, token, and latency profiler for any Python AI stack — works with OpenAI, Anthropic, Gemini, local LLMs, and custom APIs.
anuragpatil1729
A MERN (MongoDB, Express, React, Node) full-stack web application that runs a real local LLM (no API key required).
johnson00111
Full-stack job application tracker that auto-classifies Gmail emails using local LLMs (Ollama) and visualizes insights through a React dashboard.
zaidshaikh987
Vibecode Editor is a blazing-fast, AI-integrated web IDE built entirely in the browser using Next.js App Router, WebContainers, Monaco Editor, and local LLMs via Ollama. It offers real-time code execution, an AI-powered chat assistant, and support for multiple tech stacks — all wrapped in a stunning developer-first UI.
rjamestaylor
A complete local LLM stack combining Ollama with Metal acceleration for Apple Silicon and Docker-based Open WebUI. Run powerful language models locally with optimized performance, intuitive interface, and comprehensive management tools for model downloading, system control, and performance monitoring.
Mental health platform. Features local LLM integration (Ollama), real-time mood analysis, and guided meditation. Stack: Python, React, and Docker
be-student-project
A unified industrial AI stack featuring LSTM anomaly detection, adaptive Bayesian optimization, and a local LLM assistant (via MCP) for real-time machine insights.
yongzhenzh
A full-stack chatbot platform that utilizes dual-path Retrieval-Augmented Generation (RAG) for personalized medical Q&A. Built with a FastAPI backend, with both local and online LLM support and a modern Vite-based frontend (Vue). Uses FAISS and BGE embeddings for semantic search over mock user health records and a medical knowledge base.
ByteDecoder
LLM + Ollama + Open WebUI - Docker Compose stack for running LLMs on your local machine.
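Several entries above (devsnit, dalekurt, rjamestaylor, and this one) describe the same core pattern: Ollama serving models alongside Open WebUI as the chat frontend, wired together with Docker Compose. A minimal sketch of what such a stack typically looks like — not any specific repository's file; the image names, port numbers, and the `OLLAMA_BASE_URL` variable are the upstream defaults as of this writing:

```yaml
# Illustrative docker-compose.yml for a local Ollama + Open WebUI stack.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama     # persist downloaded models across restarts
    ports:
      - "11434:11434"            # Ollama HTTP API

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Point the UI at the Ollama container by its Compose service name.
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"              # chat UI at http://localhost:3000
    depends_on:
      - ollama

volumes:
  ollama:
```

After `docker compose up -d`, a model can be fetched inside the Ollama container (e.g. `docker compose exec ollama ollama pull llama3`) and then selected from the Open WebUI model picker.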
lumduan
Local LLM Stack with vLLM inference engine and Open WebUI
evinbrijesh
AI-powered SSH honeypot with local LLM (Ollama/Phi-3) that generates realistic terminal responses to trap attackers. Includes monitoring stack, session logging, and active defense mechanisms.
rudranaresh0201
Full-stack RAG (Retrieval-Augmented Generation) assistant using FastAPI, React, ChromaDB, and local LLM (Ollama).
ByteTitan-star
A full-stack AI companion platform driven by local LLMs (Ollama). Features UGC character creation, Milvus RAG, real-time voice chat, and Agent tools. Built with React & Django.
saviornt
Local AI Interface is an Electron desktop application designed to streamline your personal AI stack. It provides a unified, user-friendly hub for Dockerized services like Ollama (for LLMs) and n8n (for workflow automation), with built-in dark mode and easy one-command setup.
This repository provides a fully automated solution for deploying a Large Language Model (LLM) environment on a local Mac server using Docker, with strict network isolation and integrated monitoring.