Found 18 repositories (showing 18)
AperturePlus
Semantic level code search/indexer with tree-sitter parsing, Qdrant vector store, and Typer/FastAPI interfaces. Supports calling via MCP.
bluewings1211
A Retrieval-Augmented Generation (RAG) Model Context Protocol (MCP) server designed to help AI agents and developers understand and navigate codebases. It supports incremental indexing and multi-language parsing, enabling LLMs to understand and interact with code.
Brainwires
A Rust-based Model Context Protocol (MCP) server that provides AI assistants with powerful RAG (Retrieval-Augmented Generation) capabilities for understanding massive codebases. It indexes codebases with FastEmbed + LanceDB and supports semantic code and git-history search with hybrid vector + BM25 ranking.
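Hybrid vector + BM25 ranking, as mentioned in the entry above, typically normalizes the two score distributions and blends them with a weight. A minimal sketch of that fusion step, with hypothetical precomputed scores (the actual Rust implementation is not shown here):

```python
def normalize(scores):
    """Min-max normalize a dict of doc_id -> score into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = hi - lo or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(vector_scores, bm25_scores, alpha=0.5):
    """Blend normalized vector and BM25 scores; higher alpha favors vectors."""
    v, b = normalize(vector_scores), normalize(bm25_scores)
    docs = set(v) | set(b)
    fused = {d: alpha * v.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0) for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical scores: cosine similarities and raw BM25 values for three files.
vec = {"a.rs": 0.91, "b.rs": 0.40, "c.rs": 0.75}
bm25 = {"a.rs": 2.1, "b.rs": 7.8, "c.rs": 0.3}
print(hybrid_rank(vec, bm25, alpha=0.5))  # a.rs ranks first overall
```

With `alpha=0.5` both signals count equally; tuning it toward 1.0 favors semantic similarity, toward 0.0 keyword match.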
njrgourav11
Reflyx — a free, open-source AI coding assistant for VS Code. Works like Augment, using only free resources. Runs fully offline with local LLMs (Ollama, LM Studio, DeepSeek, Qwen) or online with GPT-4o/Claude. Indexes your entire codebase with Tree-sitter + embeddings + vector DB. Private, fast, and auto-syncs as you code.
xiaoxu123195
Go-based MCP server for codebase indexing and semantic search (Augment-compatible)
denys-yu
Research codebase for studying chunking strategies in Retrieval-Augmented Generation (RAG), with reproducible experiments, indexing methods, and QA-based evaluation.
DrNightingales
continue-rag: CLI + FastAPI server that indexes your codebase into LanceDB and provides retrieval-augmented search via OpenAI embeddings.
varinguru
A Retrieval-Augmented Generation (RAG) application that indexes GitHub repositories and enables natural language queries against codebases. It uses FAISS for vector search and Gemini embeddings.
Abhijeet967
An AI-powered Fetch.ai agent that automatically indexes GitHub repositories, generates semantic embeddings, and enables natural language querying over codebases using Retrieval-Augmented Generation (RAG) with ASI:One LLM.
DARSHAN-URS
AI Codebase Copilot is a production-style Retrieval-Augmented Generation (RAG) system that enables developers to intelligently query and explore GitHub repositories using natural language. The system ingests a repository, semantically indexes the codebase, and allows users to ask questions about architecture, logic, and implementation details.
pydevkrishna
RepoFriend is an AI-powered GitHub repository analysis platform built around Retrieval-Augmented Generation (RAG). It indexes repository code, stores vector embeddings in Qdrant, and lets users ask grounded questions about a codebase with file-level references.
syed-qasim5
A Retrieval-Augmented Generation (RAG) system built to solve the problem of navigating large, complex C++ codebases. This tool indexes local files and uses the Gemini API to provide natural language answers about code logic, functions, and architecture.
Light512
Jarvis is a Git-aware, retrieval-augmented coding assistant that understands your codebase at the level of functions, classes, and symbols. It incrementally indexes source code only on Git commits, ensuring embeddings stay consistent with the repository’s true state.
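Incremental indexing, as described for Jarvis above, comes down to re-embedding only what changed since the last indexed state. A minimal sketch of that bookkeeping, using content hashes as a stand-in for the commit diff a Git-aware indexer would use (the actual project's internals are not shown):

```python
import hashlib

def changed_files(files, index_state):
    """Return files whose content hash differs from the stored index state.

    `files` maps path -> source text; `index_state` maps path -> sha256 hex.
    A Git-aware indexer would derive this set by diffing commits instead;
    content hashing is a self-contained stand-in here.
    """
    stale = []
    for path, text in files.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if index_state.get(path) != digest:
            stale.append(path)          # needs (re-)embedding
            index_state[path] = digest  # record the indexed version
    return stale

state = {}
files = {"main.py": "print('hi')", "util.py": "def f(): pass"}
print(changed_files(files, state))  # first pass: everything is new
files["main.py"] = "print('hello')"
print(changed_files(files, state))  # second pass: only main.py changed
```

Keying the index to commits rather than working-tree contents is what keeps embeddings consistent with the repository's true state.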
Druva4444
AI Code Coach is a Retrieval-Augmented Generation (RAG) based tool designed to help developers debug, translate, and understand their codebase. It indexes your local Python files and uses a Large Language Model (LLM) to provide context-aware suggestions and explanations.
SebbyC
Project-RAG-CoWorkspace is an AI-powered co-development environment that brings your entire codebase, documentation, and project artifacts into a seamless, interactive workspace. It leverages a Retrieval-Augmented Generation (RAG) pipeline with Qdrant vector indexing and multi-LLM support.
vitcher01
This project is a local Retrieval-Augmented Generation (RAG) tool that indexes your Spring Boot codebase using code-aware embeddings, allowing you to ask an AI assistant complex architectural questions and receive accurate, context-aware answers based on your specific project files.
mouha18
This project is a private, high-performance To-Do application built with a local AI stack. Utilizing an RTX 4050 GPU via LM Studio and AnythingLLM, it employs Retrieval-Augmented Generation (RAG) to index the codebase. This allows the AI to suggest context-aware refactors and features entirely offline.
mreinrt
Tangi is a hardware-agnostic, auto-optimizing AI assistant with Retrieval-Augmented Generation (RAG) for codebases. Built on llama-cpp-python with OpenBLAS, it auto-tunes CPU, RAM, and NUMA usage for optimal performance. Features semantic indexing, deep search (/ds), and fast lookup (/search), with online and offline code-aware assistance.
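Nearly every repository above shares the same retrieval core: chunk source files, embed the chunks, and rank them against an embedded query by similarity. A minimal sketch of that pattern, with a toy bag-of-words "embedding" standing in for the real models these projects use (FastEmbed, OpenAI, Gemini, etc.):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real indexer would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank code chunks by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical code chunks; real systems chunk with tree-sitter or similar.
chunks = [
    "def parse_config(path): load YAML settings",
    "class VectorStore: insert and search embeddings",
    "fn render_template(ctx) -> html output",
]
print(retrieve("search the vector store for embeddings", chunks, k=1))
```

The retrieved chunks are then passed to an LLM as context, which is the "generation" half of RAG; the projects above differ mainly in chunking strategy, embedding model, and vector store.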