Found 287 repositories (showing 30)
uzirox76
A local RAG LLM with a persistent database for querying your PDFs
banjtheman
Chat with your documents locally
vinzenzu
Free, local, open-source RAG with Mistral 7B LLM, using local documents.
DajosPatryk
Local LLM with Ollama & pgvector, built for Llama 3.
prsaurabh
Local agentic RAG implementation with local models and fine-tuning
aiandcivilization
Complete AI assistant running locally with free LLM providers and SOTA RAG techniques.
vamsid07
No description available
Sergey360
No description available
itsajchan
A fully local RAG (Retrieval-Augmented Generation) demo showcasing Weaviate vector database with Ollama for embeddings and generation, optimized for Qualcomm Snapdragon X Elite on Windows WSL.
rammalali
R&D for best local RAG
shahiltp
LocalRAGSystem: a local RAG system with switchable models (Ollama/OpenAI), traceable with Phoenix, a Docker-run backend API, Open WebUI as the UI, and LlamaIndex embeddings.
joslat
Trying out the LocalRag approach from Arafat Tehsin
Amir-Mohseni
A simple tutorial showing how to set up a completely local RAG + LLM with LM Studio to ask questions about your own files (PDF, DOCX, TXT, MD, CSV) using less than 2.5GB of memory.
Amirthakatesan57
Minimal, fully local Python-based app for question answering over PDF documents. Uses FastAPI, React, Qdrant, and Ollama. Easily extensible to other file types and models. No cloud required—your data stays private.
jiaweing
No description available
ReallyAbdullah
LocalRAG
immanuel-peter
Terminal LLM Interface with Infinite Memory
Toluhunter
LocalRAGify is a zero-cost, local RAG chatbot built with open-source tools like OpenSearch for retrieval and Ollama for generative AI. It enables you to create and deploy an AI assistant on your own machine with a simple, user-friendly interface powered by Streamlit. No cloud or paid services required.
0xcro3dile
Fast local RAG toolkit with a Go backend for speed. Zero cloud; runs on potato hardware.
emirozturk
No description available
rugger-ai
Local and private RAG system
StefanOOE
Self-hosted RAG
SilvioBaratto
A fully local RAG (Retrieval-Augmented Generation) system for privacy-preserving document search and question answering. Run it from any project directory to index and query your documents using local LLMs.
srijxnnn
Query your documents locally with AI — no cloud, no API keys.
omairqazi29
A fully local RAG system powered by Zvec — search your documents with natural language, no API keys required.
jinac
Local mcp tool to index and expose local pdf files for LLM applications
boxabirds
Demo of how to get local chat going with folders of content. v1: Claude Conversations
sowmya13531
Offline conversational AI using RAG, FAISS, and local LLMs (Llama 3.2 via Ollama)
JohannesWittmann9
LocalRAG is a fast and modular Retrieval-Augmented Generation pipeline built for usage on consumer hardware.
bhuvanchennoju
This is the barebones implementation of RAG for LLMs.
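The repositories above all follow the same retrieval-augmented pattern: embed documents, retrieve the most similar ones for a query, and pass them to a local LLM as context. A minimal, self-contained sketch of the retrieval step is below. It uses a toy bag-of-words vector and cosine similarity purely for illustration; the real projects listed here substitute neural embedding models and vector stores such as Qdrant, FAISS, Weaviate, pgvector, or OpenSearch.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase token counts.
    # Real RAG systems use neural embedding models instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query; return top-k as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Ollama serves local LLMs over an HTTP API.",
    "Qdrant is a vector database for similarity search.",
    "FAISS indexes dense vectors for nearest-neighbor search.",
]
context = retrieve("vector database for similarity search", docs)
```

In a full pipeline, `context` would be prepended to the user question in a prompt sent to a locally served model, which is the step most of these repos delegate to Ollama or LM Studio.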